Nov 28 11:54:16 np0005539065 kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 28 11:54:16 np0005539065 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 28 11:54:16 np0005539065 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 28 11:54:16 np0005539065 kernel: BIOS-provided physical RAM map:
Nov 28 11:54:16 np0005539065 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 28 11:54:16 np0005539065 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 28 11:54:16 np0005539065 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 28 11:54:16 np0005539065 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 28 11:54:16 np0005539065 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 28 11:54:16 np0005539065 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 28 11:54:16 np0005539065 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 28 11:54:16 np0005539065 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 28 11:54:16 np0005539065 kernel: NX (Execute Disable) protection: active
Nov 28 11:54:16 np0005539065 kernel: APIC: Static calls initialized
Nov 28 11:54:16 np0005539065 kernel: SMBIOS 2.8 present.
Nov 28 11:54:16 np0005539065 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 28 11:54:16 np0005539065 kernel: Hypervisor detected: KVM
Nov 28 11:54:16 np0005539065 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 28 11:54:16 np0005539065 kernel: kvm-clock: using sched offset of 3275583097 cycles
Nov 28 11:54:16 np0005539065 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 28 11:54:16 np0005539065 kernel: tsc: Detected 2799.998 MHz processor
Nov 28 11:54:16 np0005539065 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 28 11:54:16 np0005539065 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 28 11:54:16 np0005539065 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 28 11:54:16 np0005539065 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 28 11:54:16 np0005539065 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 28 11:54:16 np0005539065 kernel: Using GB pages for direct mapping
Nov 28 11:54:16 np0005539065 kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 28 11:54:16 np0005539065 kernel: ACPI: Early table checksum verification disabled
Nov 28 11:54:16 np0005539065 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 28 11:54:16 np0005539065 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 28 11:54:16 np0005539065 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 28 11:54:16 np0005539065 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 28 11:54:16 np0005539065 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 28 11:54:16 np0005539065 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 28 11:54:16 np0005539065 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 28 11:54:16 np0005539065 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 28 11:54:16 np0005539065 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 28 11:54:16 np0005539065 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 28 11:54:16 np0005539065 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 28 11:54:16 np0005539065 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 28 11:54:16 np0005539065 kernel: No NUMA configuration found
Nov 28 11:54:16 np0005539065 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 28 11:54:16 np0005539065 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 28 11:54:16 np0005539065 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 28 11:54:16 np0005539065 kernel: Zone ranges:
Nov 28 11:54:16 np0005539065 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 28 11:54:16 np0005539065 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 28 11:54:16 np0005539065 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 28 11:54:16 np0005539065 kernel:  Device   empty
Nov 28 11:54:16 np0005539065 kernel: Movable zone start for each node
Nov 28 11:54:16 np0005539065 kernel: Early memory node ranges
Nov 28 11:54:16 np0005539065 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 28 11:54:16 np0005539065 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 28 11:54:16 np0005539065 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 28 11:54:16 np0005539065 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 28 11:54:16 np0005539065 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 28 11:54:16 np0005539065 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 28 11:54:16 np0005539065 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 28 11:54:16 np0005539065 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 28 11:54:16 np0005539065 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 28 11:54:16 np0005539065 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 28 11:54:16 np0005539065 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 28 11:54:16 np0005539065 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 28 11:54:16 np0005539065 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 28 11:54:16 np0005539065 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 28 11:54:16 np0005539065 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 28 11:54:16 np0005539065 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 28 11:54:16 np0005539065 kernel: TSC deadline timer available
Nov 28 11:54:16 np0005539065 kernel: CPU topo: Max. logical packages:   8
Nov 28 11:54:16 np0005539065 kernel: CPU topo: Max. logical dies:       8
Nov 28 11:54:16 np0005539065 kernel: CPU topo: Max. dies per package:   1
Nov 28 11:54:16 np0005539065 kernel: CPU topo: Max. threads per core:   1
Nov 28 11:54:16 np0005539065 kernel: CPU topo: Num. cores per package:     1
Nov 28 11:54:16 np0005539065 kernel: CPU topo: Num. threads per package:   1
Nov 28 11:54:16 np0005539065 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 28 11:54:16 np0005539065 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 28 11:54:16 np0005539065 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 28 11:54:16 np0005539065 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 28 11:54:16 np0005539065 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 28 11:54:16 np0005539065 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 28 11:54:16 np0005539065 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 28 11:54:16 np0005539065 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 28 11:54:16 np0005539065 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 28 11:54:16 np0005539065 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 28 11:54:16 np0005539065 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 28 11:54:16 np0005539065 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 28 11:54:16 np0005539065 kernel: Booting paravirtualized kernel on KVM
Nov 28 11:54:16 np0005539065 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 28 11:54:16 np0005539065 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 28 11:54:16 np0005539065 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 28 11:54:16 np0005539065 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 28 11:54:16 np0005539065 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 28 11:54:16 np0005539065 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 28 11:54:16 np0005539065 kernel: random: crng init done
Nov 28 11:54:16 np0005539065 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 28 11:54:16 np0005539065 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 28 11:54:16 np0005539065 kernel: Fallback order for Node 0: 0 
Nov 28 11:54:16 np0005539065 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 28 11:54:16 np0005539065 kernel: Policy zone: Normal
Nov 28 11:54:16 np0005539065 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 28 11:54:16 np0005539065 kernel: software IO TLB: area num 8.
Nov 28 11:54:16 np0005539065 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 28 11:54:16 np0005539065 kernel: ftrace: allocating 49313 entries in 193 pages
Nov 28 11:54:16 np0005539065 kernel: ftrace: allocated 193 pages with 3 groups
Nov 28 11:54:16 np0005539065 kernel: Dynamic Preempt: voluntary
Nov 28 11:54:16 np0005539065 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 28 11:54:16 np0005539065 kernel: rcu: 	RCU event tracing is enabled.
Nov 28 11:54:16 np0005539065 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 28 11:54:16 np0005539065 kernel: 	Trampoline variant of Tasks RCU enabled.
Nov 28 11:54:16 np0005539065 kernel: 	Rude variant of Tasks RCU enabled.
Nov 28 11:54:16 np0005539065 kernel: 	Tracing variant of Tasks RCU enabled.
Nov 28 11:54:16 np0005539065 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 28 11:54:16 np0005539065 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 28 11:54:16 np0005539065 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 28 11:54:16 np0005539065 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 28 11:54:16 np0005539065 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 28 11:54:16 np0005539065 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 28 11:54:16 np0005539065 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 28 11:54:16 np0005539065 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 28 11:54:16 np0005539065 kernel: Console: colour VGA+ 80x25
Nov 28 11:54:16 np0005539065 kernel: printk: console [ttyS0] enabled
Nov 28 11:54:16 np0005539065 kernel: ACPI: Core revision 20230331
Nov 28 11:54:16 np0005539065 kernel: APIC: Switch to symmetric I/O mode setup
Nov 28 11:54:16 np0005539065 kernel: x2apic enabled
Nov 28 11:54:16 np0005539065 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 28 11:54:16 np0005539065 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 28 11:54:16 np0005539065 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 28 11:54:16 np0005539065 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 28 11:54:16 np0005539065 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 28 11:54:16 np0005539065 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 28 11:54:16 np0005539065 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 28 11:54:16 np0005539065 kernel: Spectre V2 : Mitigation: Retpolines
Nov 28 11:54:16 np0005539065 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 28 11:54:16 np0005539065 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 28 11:54:16 np0005539065 kernel: RETBleed: Mitigation: untrained return thunk
Nov 28 11:54:16 np0005539065 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 28 11:54:16 np0005539065 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 28 11:54:16 np0005539065 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 28 11:54:16 np0005539065 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 28 11:54:16 np0005539065 kernel: x86/bugs: return thunk changed
Nov 28 11:54:16 np0005539065 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 28 11:54:16 np0005539065 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 28 11:54:16 np0005539065 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 28 11:54:16 np0005539065 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 28 11:54:16 np0005539065 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 28 11:54:16 np0005539065 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 28 11:54:16 np0005539065 kernel: Freeing SMP alternatives memory: 40K
Nov 28 11:54:16 np0005539065 kernel: pid_max: default: 32768 minimum: 301
Nov 28 11:54:16 np0005539065 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 28 11:54:16 np0005539065 kernel: landlock: Up and running.
Nov 28 11:54:16 np0005539065 kernel: Yama: becoming mindful.
Nov 28 11:54:16 np0005539065 kernel: SELinux:  Initializing.
Nov 28 11:54:16 np0005539065 kernel: LSM support for eBPF active
Nov 28 11:54:16 np0005539065 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 28 11:54:16 np0005539065 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 28 11:54:16 np0005539065 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 28 11:54:16 np0005539065 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 28 11:54:16 np0005539065 kernel: ... version:                0
Nov 28 11:54:16 np0005539065 kernel: ... bit width:              48
Nov 28 11:54:16 np0005539065 kernel: ... generic registers:      6
Nov 28 11:54:16 np0005539065 kernel: ... value mask:             0000ffffffffffff
Nov 28 11:54:16 np0005539065 kernel: ... max period:             00007fffffffffff
Nov 28 11:54:16 np0005539065 kernel: ... fixed-purpose events:   0
Nov 28 11:54:16 np0005539065 kernel: ... event mask:             000000000000003f
Nov 28 11:54:16 np0005539065 kernel: signal: max sigframe size: 1776
Nov 28 11:54:16 np0005539065 kernel: rcu: Hierarchical SRCU implementation.
Nov 28 11:54:16 np0005539065 kernel: rcu: 	Max phase no-delay instances is 400.
Nov 28 11:54:16 np0005539065 kernel: smp: Bringing up secondary CPUs ...
Nov 28 11:54:16 np0005539065 kernel: smpboot: x86: Booting SMP configuration:
Nov 28 11:54:16 np0005539065 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 28 11:54:16 np0005539065 kernel: smp: Brought up 1 node, 8 CPUs
Nov 28 11:54:16 np0005539065 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 28 11:54:16 np0005539065 kernel: node 0 deferred pages initialised in 14ms
Nov 28 11:54:16 np0005539065 kernel: Memory: 7765996K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 616268K reserved, 0K cma-reserved)
Nov 28 11:54:16 np0005539065 kernel: devtmpfs: initialized
Nov 28 11:54:16 np0005539065 kernel: x86/mm: Memory block size: 128MB
Nov 28 11:54:16 np0005539065 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 28 11:54:16 np0005539065 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 28 11:54:16 np0005539065 kernel: pinctrl core: initialized pinctrl subsystem
Nov 28 11:54:16 np0005539065 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 28 11:54:16 np0005539065 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 28 11:54:16 np0005539065 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 28 11:54:16 np0005539065 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 28 11:54:16 np0005539065 kernel: audit: initializing netlink subsys (disabled)
Nov 28 11:54:16 np0005539065 kernel: audit: type=2000 audit(1764348854.484:1): state=initialized audit_enabled=0 res=1
Nov 28 11:54:16 np0005539065 kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 28 11:54:16 np0005539065 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 28 11:54:16 np0005539065 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 28 11:54:16 np0005539065 kernel: cpuidle: using governor menu
Nov 28 11:54:16 np0005539065 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 28 11:54:16 np0005539065 kernel: PCI: Using configuration type 1 for base access
Nov 28 11:54:16 np0005539065 kernel: PCI: Using configuration type 1 for extended access
Nov 28 11:54:16 np0005539065 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 28 11:54:16 np0005539065 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 28 11:54:16 np0005539065 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 28 11:54:16 np0005539065 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 28 11:54:16 np0005539065 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 28 11:54:16 np0005539065 kernel: Demotion targets for Node 0: null
Nov 28 11:54:16 np0005539065 kernel: cryptd: max_cpu_qlen set to 1000
Nov 28 11:54:16 np0005539065 kernel: ACPI: Added _OSI(Module Device)
Nov 28 11:54:16 np0005539065 kernel: ACPI: Added _OSI(Processor Device)
Nov 28 11:54:16 np0005539065 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 28 11:54:16 np0005539065 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 28 11:54:16 np0005539065 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 28 11:54:16 np0005539065 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 28 11:54:16 np0005539065 kernel: ACPI: Interpreter enabled
Nov 28 11:54:16 np0005539065 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 28 11:54:16 np0005539065 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 28 11:54:16 np0005539065 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 28 11:54:16 np0005539065 kernel: PCI: Using E820 reservations for host bridge windows
Nov 28 11:54:16 np0005539065 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 28 11:54:16 np0005539065 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 28 11:54:16 np0005539065 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [3] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [4] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [5] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [6] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [7] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [8] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [9] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [10] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [11] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [12] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [13] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [14] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [15] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [16] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [17] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [18] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [19] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [20] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [21] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [22] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [23] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [24] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [25] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [26] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [27] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [28] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [29] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [30] registered
Nov 28 11:54:16 np0005539065 kernel: acpiphp: Slot [31] registered
Nov 28 11:54:16 np0005539065 kernel: PCI host bridge to bus 0000:00
Nov 28 11:54:16 np0005539065 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 28 11:54:16 np0005539065 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 28 11:54:16 np0005539065 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 28 11:54:16 np0005539065 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 28 11:54:16 np0005539065 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 28 11:54:16 np0005539065 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 28 11:54:16 np0005539065 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 28 11:54:16 np0005539065 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 28 11:54:16 np0005539065 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 28 11:54:16 np0005539065 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 28 11:54:16 np0005539065 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 28 11:54:16 np0005539065 kernel: iommu: Default domain type: Translated
Nov 28 11:54:16 np0005539065 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 28 11:54:16 np0005539065 kernel: SCSI subsystem initialized
Nov 28 11:54:16 np0005539065 kernel: ACPI: bus type USB registered
Nov 28 11:54:16 np0005539065 kernel: usbcore: registered new interface driver usbfs
Nov 28 11:54:16 np0005539065 kernel: usbcore: registered new interface driver hub
Nov 28 11:54:16 np0005539065 kernel: usbcore: registered new device driver usb
Nov 28 11:54:16 np0005539065 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 28 11:54:16 np0005539065 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 28 11:54:16 np0005539065 kernel: PTP clock support registered
Nov 28 11:54:16 np0005539065 kernel: EDAC MC: Ver: 3.0.0
Nov 28 11:54:16 np0005539065 kernel: NetLabel: Initializing
Nov 28 11:54:16 np0005539065 kernel: NetLabel:  domain hash size = 128
Nov 28 11:54:16 np0005539065 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 28 11:54:16 np0005539065 kernel: NetLabel:  unlabeled traffic allowed by default
Nov 28 11:54:16 np0005539065 kernel: PCI: Using ACPI for IRQ routing
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 28 11:54:16 np0005539065 kernel: vgaarb: loaded
Nov 28 11:54:16 np0005539065 kernel: clocksource: Switched to clocksource kvm-clock
Nov 28 11:54:16 np0005539065 kernel: VFS: Disk quotas dquot_6.6.0
Nov 28 11:54:16 np0005539065 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 28 11:54:16 np0005539065 kernel: pnp: PnP ACPI init
Nov 28 11:54:16 np0005539065 kernel: pnp: PnP ACPI: found 5 devices
Nov 28 11:54:16 np0005539065 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 28 11:54:16 np0005539065 kernel: NET: Registered PF_INET protocol family
Nov 28 11:54:16 np0005539065 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 28 11:54:16 np0005539065 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 28 11:54:16 np0005539065 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 28 11:54:16 np0005539065 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 28 11:54:16 np0005539065 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 28 11:54:16 np0005539065 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 28 11:54:16 np0005539065 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 28 11:54:16 np0005539065 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 28 11:54:16 np0005539065 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 28 11:54:16 np0005539065 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 28 11:54:16 np0005539065 kernel: NET: Registered PF_XDP protocol family
Nov 28 11:54:16 np0005539065 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 28 11:54:16 np0005539065 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 28 11:54:16 np0005539065 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 28 11:54:16 np0005539065 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 28 11:54:16 np0005539065 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 28 11:54:16 np0005539065 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 28 11:54:16 np0005539065 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 75565 usecs
Nov 28 11:54:16 np0005539065 kernel: PCI: CLS 0 bytes, default 64
Nov 28 11:54:16 np0005539065 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 28 11:54:16 np0005539065 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 28 11:54:16 np0005539065 kernel: Trying to unpack rootfs image as initramfs...
Nov 28 11:54:16 np0005539065 kernel: ACPI: bus type thunderbolt registered
Nov 28 11:54:16 np0005539065 kernel: Initialise system trusted keyrings
Nov 28 11:54:16 np0005539065 kernel: Key type blacklist registered
Nov 28 11:54:16 np0005539065 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 28 11:54:16 np0005539065 kernel: zbud: loaded
Nov 28 11:54:16 np0005539065 kernel: integrity: Platform Keyring initialized
Nov 28 11:54:16 np0005539065 kernel: integrity: Machine keyring initialized
Nov 28 11:54:16 np0005539065 kernel: Freeing initrd memory: 85868K
Nov 28 11:54:16 np0005539065 kernel: NET: Registered PF_ALG protocol family
Nov 28 11:54:16 np0005539065 kernel: xor: automatically using best checksumming function   avx       
Nov 28 11:54:16 np0005539065 kernel: Key type asymmetric registered
Nov 28 11:54:16 np0005539065 kernel: Asymmetric key parser 'x509' registered
Nov 28 11:54:16 np0005539065 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 28 11:54:16 np0005539065 kernel: io scheduler mq-deadline registered
Nov 28 11:54:16 np0005539065 kernel: io scheduler kyber registered
Nov 28 11:54:16 np0005539065 kernel: io scheduler bfq registered
Nov 28 11:54:16 np0005539065 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 28 11:54:16 np0005539065 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 28 11:54:16 np0005539065 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 28 11:54:16 np0005539065 kernel: ACPI: button: Power Button [PWRF]
Nov 28 11:54:16 np0005539065 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 28 11:54:16 np0005539065 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 28 11:54:16 np0005539065 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 28 11:54:16 np0005539065 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 28 11:54:16 np0005539065 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 28 11:54:16 np0005539065 kernel: Non-volatile memory driver v1.3
Nov 28 11:54:16 np0005539065 kernel: rdac: device handler registered
Nov 28 11:54:16 np0005539065 kernel: hp_sw: device handler registered
Nov 28 11:54:16 np0005539065 kernel: emc: device handler registered
Nov 28 11:54:16 np0005539065 kernel: alua: device handler registered
Nov 28 11:54:16 np0005539065 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 28 11:54:16 np0005539065 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 28 11:54:16 np0005539065 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 28 11:54:16 np0005539065 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 28 11:54:16 np0005539065 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 28 11:54:16 np0005539065 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 28 11:54:16 np0005539065 kernel: usb usb1: Product: UHCI Host Controller
Nov 28 11:54:16 np0005539065 kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 28 11:54:16 np0005539065 kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 28 11:54:16 np0005539065 kernel: hub 1-0:1.0: USB hub found
Nov 28 11:54:16 np0005539065 kernel: hub 1-0:1.0: 2 ports detected
Nov 28 11:54:16 np0005539065 kernel: usbcore: registered new interface driver usbserial_generic
Nov 28 11:54:16 np0005539065 kernel: usbserial: USB Serial support registered for generic
Nov 28 11:54:16 np0005539065 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 28 11:54:16 np0005539065 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 28 11:54:16 np0005539065 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 28 11:54:16 np0005539065 kernel: mousedev: PS/2 mouse device common for all mice
Nov 28 11:54:16 np0005539065 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 28 11:54:16 np0005539065 kernel: rtc_cmos 00:04: registered as rtc0
Nov 28 11:54:16 np0005539065 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 28 11:54:16 np0005539065 kernel: rtc_cmos 00:04: setting system clock to 2025-11-28T16:54:15 UTC (1764348855)
Nov 28 11:54:16 np0005539065 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 28 11:54:16 np0005539065 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 28 11:54:16 np0005539065 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 28 11:54:16 np0005539065 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 28 11:54:16 np0005539065 kernel: usbcore: registered new interface driver usbhid
Nov 28 11:54:16 np0005539065 kernel: usbhid: USB HID core driver
Nov 28 11:54:16 np0005539065 kernel: drop_monitor: Initializing network drop monitor service
Nov 28 11:54:16 np0005539065 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 28 11:54:16 np0005539065 kernel: Initializing XFRM netlink socket
Nov 28 11:54:16 np0005539065 kernel: NET: Registered PF_INET6 protocol family
Nov 28 11:54:16 np0005539065 kernel: Segment Routing with IPv6
Nov 28 11:54:16 np0005539065 kernel: NET: Registered PF_PACKET protocol family
Nov 28 11:54:16 np0005539065 kernel: mpls_gso: MPLS GSO support
Nov 28 11:54:16 np0005539065 kernel: IPI shorthand broadcast: enabled
Nov 28 11:54:16 np0005539065 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 28 11:54:16 np0005539065 kernel: AES CTR mode by8 optimization enabled
Nov 28 11:54:16 np0005539065 kernel: sched_clock: Marking stable (1197006272, 152554509)->(1429983401, -80422620)
Nov 28 11:54:16 np0005539065 kernel: registered taskstats version 1
Nov 28 11:54:16 np0005539065 kernel: Loading compiled-in X.509 certificates
Nov 28 11:54:16 np0005539065 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 28 11:54:16 np0005539065 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 28 11:54:16 np0005539065 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 28 11:54:16 np0005539065 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 28 11:54:16 np0005539065 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 28 11:54:16 np0005539065 kernel: Demotion targets for Node 0: null
Nov 28 11:54:16 np0005539065 kernel: page_owner is disabled
Nov 28 11:54:16 np0005539065 kernel: Key type .fscrypt registered
Nov 28 11:54:16 np0005539065 kernel: Key type fscrypt-provisioning registered
Nov 28 11:54:16 np0005539065 kernel: Key type big_key registered
Nov 28 11:54:16 np0005539065 kernel: Key type encrypted registered
Nov 28 11:54:16 np0005539065 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 28 11:54:16 np0005539065 kernel: Loading compiled-in module X.509 certificates
Nov 28 11:54:16 np0005539065 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 28 11:54:16 np0005539065 kernel: ima: Allocated hash algorithm: sha256
Nov 28 11:54:16 np0005539065 kernel: ima: No architecture policies found
Nov 28 11:54:16 np0005539065 kernel: evm: Initialising EVM extended attributes:
Nov 28 11:54:16 np0005539065 kernel: evm: security.selinux
Nov 28 11:54:16 np0005539065 kernel: evm: security.SMACK64 (disabled)
Nov 28 11:54:16 np0005539065 kernel: evm: security.SMACK64EXEC (disabled)
Nov 28 11:54:16 np0005539065 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 28 11:54:16 np0005539065 kernel: evm: security.SMACK64MMAP (disabled)
Nov 28 11:54:16 np0005539065 kernel: evm: security.apparmor (disabled)
Nov 28 11:54:16 np0005539065 kernel: evm: security.ima
Nov 28 11:54:16 np0005539065 kernel: evm: security.capability
Nov 28 11:54:16 np0005539065 kernel: evm: HMAC attrs: 0x1
Nov 28 11:54:16 np0005539065 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 28 11:54:16 np0005539065 kernel: Running certificate verification RSA selftest
Nov 28 11:54:16 np0005539065 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 28 11:54:16 np0005539065 kernel: Running certificate verification ECDSA selftest
Nov 28 11:54:16 np0005539065 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 28 11:54:16 np0005539065 kernel: clk: Disabling unused clocks
Nov 28 11:54:16 np0005539065 kernel: Freeing unused decrypted memory: 2028K
Nov 28 11:54:16 np0005539065 kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 28 11:54:16 np0005539065 kernel: Write protecting the kernel read-only data: 30720k
Nov 28 11:54:16 np0005539065 kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 28 11:54:16 np0005539065 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 28 11:54:16 np0005539065 kernel: Run /init as init process
Nov 28 11:54:16 np0005539065 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 28 11:54:16 np0005539065 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 28 11:54:16 np0005539065 kernel: usb 1-1: Product: QEMU USB Tablet
Nov 28 11:54:16 np0005539065 kernel: usb 1-1: Manufacturer: QEMU
Nov 28 11:54:16 np0005539065 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 28 11:54:16 np0005539065 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 28 11:54:16 np0005539065 systemd: Detected virtualization kvm.
Nov 28 11:54:16 np0005539065 systemd: Detected architecture x86-64.
Nov 28 11:54:16 np0005539065 systemd: Running in initrd.
Nov 28 11:54:16 np0005539065 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 28 11:54:16 np0005539065 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 28 11:54:16 np0005539065 systemd: No hostname configured, using default hostname.
Nov 28 11:54:16 np0005539065 systemd: Hostname set to <localhost>.
Nov 28 11:54:16 np0005539065 systemd: Initializing machine ID from VM UUID.
Nov 28 11:54:16 np0005539065 systemd: Queued start job for default target Initrd Default Target.
Nov 28 11:54:16 np0005539065 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 28 11:54:16 np0005539065 systemd: Reached target Local Encrypted Volumes.
Nov 28 11:54:16 np0005539065 systemd: Reached target Initrd /usr File System.
Nov 28 11:54:16 np0005539065 systemd: Reached target Local File Systems.
Nov 28 11:54:16 np0005539065 systemd: Reached target Path Units.
Nov 28 11:54:16 np0005539065 systemd: Reached target Slice Units.
Nov 28 11:54:16 np0005539065 systemd: Reached target Swaps.
Nov 28 11:54:16 np0005539065 systemd: Reached target Timer Units.
Nov 28 11:54:16 np0005539065 systemd: Listening on D-Bus System Message Bus Socket.
Nov 28 11:54:16 np0005539065 systemd: Listening on Journal Socket (/dev/log).
Nov 28 11:54:16 np0005539065 systemd: Listening on Journal Socket.
Nov 28 11:54:16 np0005539065 systemd: Listening on udev Control Socket.
Nov 28 11:54:16 np0005539065 systemd: Listening on udev Kernel Socket.
Nov 28 11:54:16 np0005539065 systemd: Reached target Socket Units.
Nov 28 11:54:16 np0005539065 systemd: Starting Create List of Static Device Nodes...
Nov 28 11:54:16 np0005539065 systemd: Starting Journal Service...
Nov 28 11:54:16 np0005539065 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 28 11:54:16 np0005539065 systemd: Starting Apply Kernel Variables...
Nov 28 11:54:16 np0005539065 systemd: Starting Create System Users...
Nov 28 11:54:16 np0005539065 systemd: Starting Setup Virtual Console...
Nov 28 11:54:16 np0005539065 systemd: Finished Create List of Static Device Nodes.
Nov 28 11:54:16 np0005539065 systemd: Finished Apply Kernel Variables.
Nov 28 11:54:16 np0005539065 systemd: Finished Create System Users.
Nov 28 11:54:16 np0005539065 systemd-journald[302]: Journal started
Nov 28 11:54:16 np0005539065 systemd-journald[302]: Runtime Journal (/run/log/journal/23602de7dd9c46ae9cbaa45f7911b9d9) is 8.0M, max 153.6M, 145.6M free.
Nov 28 11:54:16 np0005539065 systemd-sysusers[307]: Creating group 'users' with GID 100.
Nov 28 11:54:16 np0005539065 systemd-sysusers[307]: Creating group 'dbus' with GID 81.
Nov 28 11:54:16 np0005539065 systemd-sysusers[307]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 28 11:54:16 np0005539065 systemd: Starting Create Static Device Nodes in /dev...
Nov 28 11:54:16 np0005539065 systemd: Started Journal Service.
Nov 28 11:54:16 np0005539065 systemd[1]: Starting Create Volatile Files and Directories...
Nov 28 11:54:16 np0005539065 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 28 11:54:16 np0005539065 systemd[1]: Finished Create Volatile Files and Directories.
Nov 28 11:54:16 np0005539065 systemd[1]: Finished Setup Virtual Console.
Nov 28 11:54:16 np0005539065 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 28 11:54:16 np0005539065 systemd[1]: Starting dracut cmdline hook...
Nov 28 11:54:16 np0005539065 dracut-cmdline[323]: dracut-9 dracut-057-102.git20250818.el9
Nov 28 11:54:16 np0005539065 dracut-cmdline[323]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 28 11:54:16 np0005539065 systemd[1]: Finished dracut cmdline hook.
Nov 28 11:54:16 np0005539065 systemd[1]: Starting dracut pre-udev hook...
Nov 28 11:54:16 np0005539065 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 28 11:54:16 np0005539065 kernel: device-mapper: uevent: version 1.0.3
Nov 28 11:54:16 np0005539065 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 28 11:54:16 np0005539065 kernel: RPC: Registered named UNIX socket transport module.
Nov 28 11:54:16 np0005539065 kernel: RPC: Registered udp transport module.
Nov 28 11:54:16 np0005539065 kernel: RPC: Registered tcp transport module.
Nov 28 11:54:16 np0005539065 kernel: RPC: Registered tcp-with-tls transport module.
Nov 28 11:54:16 np0005539065 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 28 11:54:16 np0005539065 rpc.statd[442]: Version 2.5.4 starting
Nov 28 11:54:16 np0005539065 rpc.statd[442]: Initializing NSM state
Nov 28 11:54:16 np0005539065 rpc.idmapd[447]: Setting log level to 0
Nov 28 11:54:16 np0005539065 systemd[1]: Finished dracut pre-udev hook.
Nov 28 11:54:16 np0005539065 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 28 11:54:16 np0005539065 systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Nov 28 11:54:16 np0005539065 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 28 11:54:16 np0005539065 systemd[1]: Starting dracut pre-trigger hook...
Nov 28 11:54:16 np0005539065 systemd[1]: Finished dracut pre-trigger hook.
Nov 28 11:54:16 np0005539065 systemd[1]: Starting Coldplug All udev Devices...
Nov 28 11:54:16 np0005539065 systemd[1]: Created slice Slice /system/modprobe.
Nov 28 11:54:16 np0005539065 systemd[1]: Starting Load Kernel Module configfs...
Nov 28 11:54:16 np0005539065 systemd[1]: Finished Coldplug All udev Devices.
Nov 28 11:54:16 np0005539065 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 28 11:54:16 np0005539065 systemd[1]: Finished Load Kernel Module configfs.
Nov 28 11:54:16 np0005539065 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 28 11:54:16 np0005539065 systemd[1]: Reached target Network.
Nov 28 11:54:16 np0005539065 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 28 11:54:16 np0005539065 systemd[1]: Starting dracut initqueue hook...
Nov 28 11:54:16 np0005539065 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 28 11:54:17 np0005539065 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 28 11:54:17 np0005539065 kernel: vda: vda1
Nov 28 11:54:17 np0005539065 kernel: scsi host0: ata_piix
Nov 28 11:54:17 np0005539065 kernel: scsi host1: ata_piix
Nov 28 11:54:17 np0005539065 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 28 11:54:17 np0005539065 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 28 11:54:17 np0005539065 systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 28 11:54:17 np0005539065 systemd[1]: Reached target Initrd Root Device.
Nov 28 11:54:17 np0005539065 systemd[1]: Mounting Kernel Configuration File System...
Nov 28 11:54:17 np0005539065 systemd[1]: Mounted Kernel Configuration File System.
Nov 28 11:54:17 np0005539065 systemd[1]: Reached target System Initialization.
Nov 28 11:54:17 np0005539065 systemd[1]: Reached target Basic System.
Nov 28 11:54:17 np0005539065 kernel: ata1: found unknown device (class 0)
Nov 28 11:54:17 np0005539065 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 28 11:54:17 np0005539065 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 28 11:54:17 np0005539065 systemd-udevd[474]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 11:54:17 np0005539065 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 28 11:54:17 np0005539065 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 28 11:54:17 np0005539065 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 28 11:54:17 np0005539065 systemd[1]: Finished dracut initqueue hook.
Nov 28 11:54:17 np0005539065 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 28 11:54:17 np0005539065 systemd[1]: Reached target Remote Encrypted Volumes.
Nov 28 11:54:17 np0005539065 systemd[1]: Reached target Remote File Systems.
Nov 28 11:54:17 np0005539065 systemd[1]: Starting dracut pre-mount hook...
Nov 28 11:54:17 np0005539065 systemd[1]: Finished dracut pre-mount hook.
Nov 28 11:54:17 np0005539065 systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Nov 28 11:54:17 np0005539065 systemd-fsck[553]: /usr/sbin/fsck.xfs: XFS file system.
Nov 28 11:54:17 np0005539065 systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 28 11:54:17 np0005539065 systemd[1]: Mounting /sysroot...
Nov 28 11:54:17 np0005539065 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 28 11:54:17 np0005539065 kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Nov 28 11:54:18 np0005539065 kernel: XFS (vda1): Ending clean mount
Nov 28 11:54:18 np0005539065 systemd[1]: Mounted /sysroot.
Nov 28 11:54:18 np0005539065 systemd[1]: Reached target Initrd Root File System.
Nov 28 11:54:18 np0005539065 systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 28 11:54:18 np0005539065 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 28 11:54:18 np0005539065 systemd[1]: Reached target Initrd File Systems.
Nov 28 11:54:18 np0005539065 systemd[1]: Reached target Initrd Default Target.
Nov 28 11:54:18 np0005539065 systemd[1]: Starting dracut mount hook...
Nov 28 11:54:18 np0005539065 systemd[1]: Finished dracut mount hook.
Nov 28 11:54:18 np0005539065 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 28 11:54:18 np0005539065 rpc.idmapd[447]: exiting on signal 15
Nov 28 11:54:18 np0005539065 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 28 11:54:18 np0005539065 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped target Network.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped target Timer Units.
Nov 28 11:54:18 np0005539065 systemd[1]: dbus.socket: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 28 11:54:18 np0005539065 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped target Initrd Default Target.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped target Basic System.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped target Initrd Root Device.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped target Initrd /usr File System.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped target Path Units.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped target Remote File Systems.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped target Slice Units.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped target Socket Units.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped target System Initialization.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped target Local File Systems.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped target Swaps.
Nov 28 11:54:18 np0005539065 systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped dracut mount hook.
Nov 28 11:54:18 np0005539065 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped dracut pre-mount hook.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped target Local Encrypted Volumes.
Nov 28 11:54:18 np0005539065 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 28 11:54:18 np0005539065 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped dracut initqueue hook.
Nov 28 11:54:18 np0005539065 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped Apply Kernel Variables.
Nov 28 11:54:18 np0005539065 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped Create Volatile Files and Directories.
Nov 28 11:54:18 np0005539065 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped Coldplug All udev Devices.
Nov 28 11:54:18 np0005539065 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped dracut pre-trigger hook.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 28 11:54:18 np0005539065 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped Setup Virtual Console.
Nov 28 11:54:18 np0005539065 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 28 11:54:18 np0005539065 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 28 11:54:18 np0005539065 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: Closed udev Control Socket.
Nov 28 11:54:18 np0005539065 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: Closed udev Kernel Socket.
Nov 28 11:54:18 np0005539065 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped dracut pre-udev hook.
Nov 28 11:54:18 np0005539065 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped dracut cmdline hook.
Nov 28 11:54:18 np0005539065 systemd[1]: Starting Cleanup udev Database...
Nov 28 11:54:18 np0005539065 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 28 11:54:18 np0005539065 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped Create List of Static Device Nodes.
Nov 28 11:54:18 np0005539065 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: Stopped Create System Users.
Nov 28 11:54:18 np0005539065 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 28 11:54:18 np0005539065 systemd[1]: Finished Cleanup udev Database.
Nov 28 11:54:18 np0005539065 systemd[1]: Reached target Switch Root.
Nov 28 11:54:18 np0005539065 systemd[1]: Starting Switch Root...
Nov 28 11:54:18 np0005539065 systemd[1]: Switching root.
Nov 28 11:54:18 np0005539065 systemd-journald[302]: Journal stopped
Nov 28 11:54:19 np0005539065 systemd-journald: Received SIGTERM from PID 1 (systemd).
Nov 28 11:54:19 np0005539065 kernel: audit: type=1404 audit(1764348858.462:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 28 11:54:19 np0005539065 kernel: SELinux:  policy capability network_peer_controls=1
Nov 28 11:54:19 np0005539065 kernel: SELinux:  policy capability open_perms=1
Nov 28 11:54:19 np0005539065 kernel: SELinux:  policy capability extended_socket_class=1
Nov 28 11:54:19 np0005539065 kernel: SELinux:  policy capability always_check_network=0
Nov 28 11:54:19 np0005539065 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 28 11:54:19 np0005539065 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 28 11:54:19 np0005539065 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 28 11:54:19 np0005539065 kernel: audit: type=1403 audit(1764348858.606:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 28 11:54:19 np0005539065 systemd: Successfully loaded SELinux policy in 148.478ms.
Nov 28 11:54:19 np0005539065 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 32.440ms.
Nov 28 11:54:19 np0005539065 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 28 11:54:19 np0005539065 systemd: Detected virtualization kvm.
Nov 28 11:54:19 np0005539065 systemd: Detected architecture x86-64.
Nov 28 11:54:19 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 11:54:19 np0005539065 systemd: initrd-switch-root.service: Deactivated successfully.
Nov 28 11:54:19 np0005539065 systemd: Stopped Switch Root.
Nov 28 11:54:19 np0005539065 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 28 11:54:19 np0005539065 systemd: Created slice Slice /system/getty.
Nov 28 11:54:19 np0005539065 systemd: Created slice Slice /system/serial-getty.
Nov 28 11:54:19 np0005539065 systemd: Created slice Slice /system/sshd-keygen.
Nov 28 11:54:19 np0005539065 systemd: Created slice User and Session Slice.
Nov 28 11:54:19 np0005539065 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 28 11:54:19 np0005539065 systemd: Started Forward Password Requests to Wall Directory Watch.
Nov 28 11:54:19 np0005539065 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 28 11:54:19 np0005539065 systemd: Reached target Local Encrypted Volumes.
Nov 28 11:54:19 np0005539065 systemd: Stopped target Switch Root.
Nov 28 11:54:19 np0005539065 systemd: Stopped target Initrd File Systems.
Nov 28 11:54:19 np0005539065 systemd: Stopped target Initrd Root File System.
Nov 28 11:54:19 np0005539065 systemd: Reached target Local Integrity Protected Volumes.
Nov 28 11:54:19 np0005539065 systemd: Reached target Path Units.
Nov 28 11:54:19 np0005539065 systemd: Reached target rpc_pipefs.target.
Nov 28 11:54:19 np0005539065 systemd: Reached target Slice Units.
Nov 28 11:54:19 np0005539065 systemd: Reached target Swaps.
Nov 28 11:54:19 np0005539065 systemd: Reached target Local Verity Protected Volumes.
Nov 28 11:54:19 np0005539065 systemd: Listening on RPCbind Server Activation Socket.
Nov 28 11:54:19 np0005539065 systemd: Reached target RPC Port Mapper.
Nov 28 11:54:19 np0005539065 systemd: Listening on Process Core Dump Socket.
Nov 28 11:54:19 np0005539065 systemd: Listening on initctl Compatibility Named Pipe.
Nov 28 11:54:19 np0005539065 systemd: Listening on udev Control Socket.
Nov 28 11:54:19 np0005539065 systemd: Listening on udev Kernel Socket.
Nov 28 11:54:19 np0005539065 systemd: Mounting Huge Pages File System...
Nov 28 11:54:19 np0005539065 systemd: Mounting POSIX Message Queue File System...
Nov 28 11:54:19 np0005539065 systemd: Mounting Kernel Debug File System...
Nov 28 11:54:19 np0005539065 systemd: Mounting Kernel Trace File System...
Nov 28 11:54:19 np0005539065 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 28 11:54:19 np0005539065 systemd: Starting Create List of Static Device Nodes...
Nov 28 11:54:19 np0005539065 systemd: Starting Load Kernel Module configfs...
Nov 28 11:54:19 np0005539065 systemd: Starting Load Kernel Module drm...
Nov 28 11:54:19 np0005539065 systemd: Starting Load Kernel Module efi_pstore...
Nov 28 11:54:19 np0005539065 systemd: Starting Load Kernel Module fuse...
Nov 28 11:54:19 np0005539065 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 28 11:54:19 np0005539065 systemd: systemd-fsck-root.service: Deactivated successfully.
Nov 28 11:54:19 np0005539065 systemd: Stopped File System Check on Root Device.
Nov 28 11:54:19 np0005539065 systemd: Stopped Journal Service.
Nov 28 11:54:19 np0005539065 systemd: Starting Journal Service...
Nov 28 11:54:19 np0005539065 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 28 11:54:19 np0005539065 systemd: Starting Generate network units from Kernel command line...
Nov 28 11:54:19 np0005539065 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 28 11:54:19 np0005539065 systemd: Starting Remount Root and Kernel File Systems...
Nov 28 11:54:19 np0005539065 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 28 11:54:19 np0005539065 systemd: Starting Apply Kernel Variables...
Nov 28 11:54:19 np0005539065 kernel: fuse: init (API version 7.37)
Nov 28 11:54:19 np0005539065 systemd: Starting Coldplug All udev Devices...
Nov 28 11:54:19 np0005539065 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 28 11:54:19 np0005539065 systemd: Mounted Huge Pages File System.
Nov 28 11:54:19 np0005539065 systemd: Mounted POSIX Message Queue File System.
Nov 28 11:54:19 np0005539065 systemd: Mounted Kernel Debug File System.
Nov 28 11:54:19 np0005539065 systemd: Mounted Kernel Trace File System.
Nov 28 11:54:19 np0005539065 systemd: Finished Create List of Static Device Nodes.
Nov 28 11:54:19 np0005539065 systemd: modprobe@configfs.service: Deactivated successfully.
Nov 28 11:54:19 np0005539065 systemd: Finished Load Kernel Module configfs.
Nov 28 11:54:19 np0005539065 systemd-journald[676]: Journal started
Nov 28 11:54:19 np0005539065 systemd-journald[676]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 28 11:54:19 np0005539065 systemd[1]: Queued start job for default target Multi-User System.
Nov 28 11:54:19 np0005539065 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 28 11:54:19 np0005539065 systemd: Started Journal Service.
Nov 28 11:54:19 np0005539065 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 28 11:54:19 np0005539065 systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 28 11:54:19 np0005539065 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 28 11:54:19 np0005539065 systemd[1]: Finished Load Kernel Module fuse.
Nov 28 11:54:19 np0005539065 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 28 11:54:19 np0005539065 kernel: ACPI: bus type drm_connector registered
Nov 28 11:54:19 np0005539065 systemd[1]: Finished Generate network units from Kernel command line.
Nov 28 11:54:19 np0005539065 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 28 11:54:19 np0005539065 systemd[1]: Finished Load Kernel Module drm.
Nov 28 11:54:19 np0005539065 systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 28 11:54:19 np0005539065 systemd[1]: Finished Apply Kernel Variables.
Nov 28 11:54:19 np0005539065 systemd[1]: Mounting FUSE Control File System...
Nov 28 11:54:19 np0005539065 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 28 11:54:19 np0005539065 systemd[1]: Starting Rebuild Hardware Database...
Nov 28 11:54:19 np0005539065 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 28 11:54:19 np0005539065 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 28 11:54:19 np0005539065 systemd[1]: Starting Load/Save OS Random Seed...
Nov 28 11:54:19 np0005539065 systemd-journald[676]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 28 11:54:19 np0005539065 systemd[1]: Starting Create System Users...
Nov 28 11:54:19 np0005539065 systemd-journald[676]: Received client request to flush runtime journal.
Nov 28 11:54:19 np0005539065 systemd[1]: Mounted FUSE Control File System.
Nov 28 11:54:19 np0005539065 systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 28 11:54:19 np0005539065 systemd[1]: Finished Load/Save OS Random Seed.
Nov 28 11:54:19 np0005539065 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 28 11:54:19 np0005539065 systemd[1]: Finished Create System Users.
Nov 28 11:54:19 np0005539065 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 28 11:54:19 np0005539065 systemd[1]: Finished Coldplug All udev Devices.
Nov 28 11:54:19 np0005539065 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 28 11:54:19 np0005539065 systemd[1]: Reached target Preparation for Local File Systems.
Nov 28 11:54:19 np0005539065 systemd[1]: Reached target Local File Systems.
Nov 28 11:54:19 np0005539065 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 28 11:54:19 np0005539065 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 28 11:54:19 np0005539065 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 28 11:54:19 np0005539065 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 28 11:54:19 np0005539065 systemd[1]: Starting Automatic Boot Loader Update...
Nov 28 11:54:19 np0005539065 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 28 11:54:19 np0005539065 systemd[1]: Starting Create Volatile Files and Directories...
Nov 28 11:54:19 np0005539065 bootctl[694]: Couldn't find EFI system partition, skipping.
Nov 28 11:54:19 np0005539065 systemd[1]: Finished Automatic Boot Loader Update.
Nov 28 11:54:19 np0005539065 systemd[1]: Finished Create Volatile Files and Directories.
Nov 28 11:54:19 np0005539065 systemd[1]: Starting Security Auditing Service...
Nov 28 11:54:19 np0005539065 systemd[1]: Starting RPC Bind...
Nov 28 11:54:19 np0005539065 systemd[1]: Starting Rebuild Journal Catalog...
Nov 28 11:54:19 np0005539065 auditd[700]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 28 11:54:19 np0005539065 auditd[700]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 28 11:54:19 np0005539065 systemd[1]: Finished Rebuild Journal Catalog.
Nov 28 11:54:19 np0005539065 systemd[1]: Started RPC Bind.
Nov 28 11:54:19 np0005539065 augenrules[705]: /sbin/augenrules: No change
Nov 28 11:54:19 np0005539065 augenrules[721]: No rules
Nov 28 11:54:19 np0005539065 augenrules[721]: enabled 1
Nov 28 11:54:19 np0005539065 augenrules[721]: failure 1
Nov 28 11:54:19 np0005539065 augenrules[721]: pid 700
Nov 28 11:54:19 np0005539065 augenrules[721]: rate_limit 0
Nov 28 11:54:19 np0005539065 augenrules[721]: backlog_limit 8192
Nov 28 11:54:19 np0005539065 augenrules[721]: lost 0
Nov 28 11:54:19 np0005539065 augenrules[721]: backlog 4
Nov 28 11:54:19 np0005539065 augenrules[721]: backlog_wait_time 60000
Nov 28 11:54:19 np0005539065 augenrules[721]: backlog_wait_time_actual 0
Nov 28 11:54:19 np0005539065 systemd[1]: Started Security Auditing Service.
Nov 28 11:54:19 np0005539065 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 28 11:54:19 np0005539065 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 28 11:54:19 np0005539065 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 28 11:54:19 np0005539065 systemd[1]: Finished Rebuild Hardware Database.
Nov 28 11:54:19 np0005539065 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 28 11:54:19 np0005539065 systemd[1]: Starting Update is Completed...
Nov 28 11:54:19 np0005539065 systemd[1]: Finished Update is Completed.
Nov 28 11:54:19 np0005539065 systemd-udevd[730]: Using default interface naming scheme 'rhel-9.0'.
Nov 28 11:54:19 np0005539065 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 28 11:54:19 np0005539065 systemd[1]: Reached target System Initialization.
Nov 28 11:54:19 np0005539065 systemd[1]: Started dnf makecache --timer.
Nov 28 11:54:19 np0005539065 systemd[1]: Started Daily rotation of log files.
Nov 28 11:54:19 np0005539065 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 28 11:54:19 np0005539065 systemd[1]: Reached target Timer Units.
Nov 28 11:54:19 np0005539065 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 28 11:54:19 np0005539065 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 28 11:54:19 np0005539065 systemd[1]: Reached target Socket Units.
Nov 28 11:54:19 np0005539065 systemd[1]: Starting D-Bus System Message Bus...
Nov 28 11:54:19 np0005539065 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 28 11:54:19 np0005539065 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 28 11:54:19 np0005539065 systemd[1]: Starting Load Kernel Module configfs...
Nov 28 11:54:19 np0005539065 systemd-udevd[734]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 11:54:19 np0005539065 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 28 11:54:19 np0005539065 systemd[1]: Finished Load Kernel Module configfs.
Nov 28 11:54:19 np0005539065 systemd[1]: Started D-Bus System Message Bus.
Nov 28 11:54:19 np0005539065 systemd[1]: Reached target Basic System.
Nov 28 11:54:19 np0005539065 dbus-broker-lau[758]: Ready
Nov 28 11:54:19 np0005539065 systemd[1]: Starting NTP client/server...
Nov 28 11:54:19 np0005539065 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 28 11:54:19 np0005539065 systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 28 11:54:19 np0005539065 systemd[1]: Starting IPv4 firewall with iptables...
Nov 28 11:54:19 np0005539065 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 28 11:54:19 np0005539065 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 28 11:54:19 np0005539065 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 28 11:54:19 np0005539065 systemd[1]: Started irqbalance daemon.
Nov 28 11:54:19 np0005539065 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 28 11:54:19 np0005539065 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 28 11:54:19 np0005539065 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 28 11:54:19 np0005539065 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 28 11:54:19 np0005539065 systemd[1]: Reached target sshd-keygen.target.
Nov 28 11:54:19 np0005539065 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 28 11:54:19 np0005539065 systemd[1]: Reached target User and Group Name Lookups.
Nov 28 11:54:19 np0005539065 systemd[1]: Starting User Login Management...
Nov 28 11:54:19 np0005539065 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 28 11:54:19 np0005539065 systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 28 11:54:19 np0005539065 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 28 11:54:19 np0005539065 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 28 11:54:19 np0005539065 chronyd[793]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 28 11:54:19 np0005539065 chronyd[793]: Loaded 0 symmetric keys
Nov 28 11:54:19 np0005539065 chronyd[793]: Using right/UTC timezone to obtain leap second data
Nov 28 11:54:19 np0005539065 chronyd[793]: Loaded seccomp filter (level 2)
Nov 28 11:54:19 np0005539065 kernel: Console: switching to colour dummy device 80x25
Nov 28 11:54:19 np0005539065 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 28 11:54:19 np0005539065 kernel: [drm] features: -context_init
Nov 28 11:54:20 np0005539065 systemd[1]: Started NTP client/server.
Nov 28 11:54:20 np0005539065 kernel: [drm] number of scanouts: 1
Nov 28 11:54:20 np0005539065 kernel: [drm] number of cap sets: 0
Nov 28 11:54:20 np0005539065 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 28 11:54:20 np0005539065 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 28 11:54:20 np0005539065 kernel: Console: switching to colour frame buffer device 128x48
Nov 28 11:54:20 np0005539065 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 28 11:54:20 np0005539065 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 28 11:54:20 np0005539065 systemd-logind[790]: New seat seat0.
Nov 28 11:54:20 np0005539065 systemd-logind[790]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 28 11:54:20 np0005539065 systemd-logind[790]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 28 11:54:20 np0005539065 systemd[1]: Started User Login Management.
Nov 28 11:54:20 np0005539065 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 28 11:54:20 np0005539065 kernel: kvm_amd: TSC scaling supported
Nov 28 11:54:20 np0005539065 kernel: kvm_amd: Nested Virtualization enabled
Nov 28 11:54:20 np0005539065 kernel: kvm_amd: Nested Paging enabled
Nov 28 11:54:20 np0005539065 kernel: kvm_amd: LBR virtualization supported
Nov 28 11:54:20 np0005539065 iptables.init[779]: iptables: Applying firewall rules: [  OK  ]
Nov 28 11:54:20 np0005539065 systemd[1]: Finished IPv4 firewall with iptables.
Nov 28 11:54:20 np0005539065 cloud-init[839]: Cloud-init v. 24.4-7.el9 running 'init-local' at Fri, 28 Nov 2025 16:54:20 +0000. Up 6.08 seconds.
Nov 28 11:54:20 np0005539065 systemd[1]: run-cloud\x2dinit-tmp-tmpuvvghk_4.mount: Deactivated successfully.
Nov 28 11:54:20 np0005539065 systemd[1]: Starting Hostname Service...
Nov 28 11:54:20 np0005539065 systemd[1]: Started Hostname Service.
Nov 28 11:54:20 np0005539065 systemd-hostnamed[853]: Hostname set to <np0005539065.novalocal> (static)
Nov 28 11:54:20 np0005539065 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 28 11:54:20 np0005539065 systemd[1]: Reached target Preparation for Network.
Nov 28 11:54:20 np0005539065 systemd[1]: Starting Network Manager...
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9288] NetworkManager (version 1.54.1-1.el9) is starting... (boot:689ffb1a-47b1-4ef9-97d1-e98882930650)
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9292] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9356] manager[0x556cdc1a8080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9398] hostname: hostname: using hostnamed
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9398] hostname: static hostname changed from (none) to "np0005539065.novalocal"
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9402] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9502] manager[0x556cdc1a8080]: rfkill: Wi-Fi hardware radio set enabled
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9503] manager[0x556cdc1a8080]: rfkill: WWAN hardware radio set enabled
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9543] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9544] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 28 11:54:20 np0005539065 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9544] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9547] manager: Networking is enabled by state file
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9549] settings: Loaded settings plugin: keyfile (internal)
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9561] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9582] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9593] dhcp: init: Using DHCP client 'internal'
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9595] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9606] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9615] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9622] device (lo): Activation: starting connection 'lo' (ebd0c5b7-fd31-4dc9-bad3-b5977a867d53)
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9629] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9631] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9656] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9660] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9662] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9664] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9665] device (eth0): carrier: link connected
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9668] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9673] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9678] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9680] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9681] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9682] manager: NetworkManager state is now CONNECTING
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9683] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9687] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9690] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 28 11:54:20 np0005539065 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 28 11:54:20 np0005539065 systemd[1]: Started Network Manager.
Nov 28 11:54:20 np0005539065 systemd[1]: Reached target Network.
Nov 28 11:54:20 np0005539065 systemd[1]: Starting Network Manager Wait Online...
Nov 28 11:54:20 np0005539065 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 28 11:54:20 np0005539065 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9908] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9910] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 28 11:54:20 np0005539065 NetworkManager[857]: <info>  [1764348860.9916] device (lo): Activation: successful, device activated.
Nov 28 11:54:20 np0005539065 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 28 11:54:20 np0005539065 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 28 11:54:20 np0005539065 systemd[1]: Reached target NFS client services.
Nov 28 11:54:20 np0005539065 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 28 11:54:21 np0005539065 systemd[1]: Reached target Remote File Systems.
Nov 28 11:54:21 np0005539065 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 28 11:54:23 np0005539065 NetworkManager[857]: <info>  [1764348863.0204] dhcp4 (eth0): state changed new lease, address=38.129.56.33
Nov 28 11:54:23 np0005539065 NetworkManager[857]: <info>  [1764348863.0218] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 28 11:54:23 np0005539065 NetworkManager[857]: <info>  [1764348863.0247] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 28 11:54:23 np0005539065 NetworkManager[857]: <info>  [1764348863.0286] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 28 11:54:23 np0005539065 NetworkManager[857]: <info>  [1764348863.0288] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 28 11:54:23 np0005539065 NetworkManager[857]: <info>  [1764348863.0293] manager: NetworkManager state is now CONNECTED_SITE
Nov 28 11:54:23 np0005539065 NetworkManager[857]: <info>  [1764348863.0297] device (eth0): Activation: successful, device activated.
Nov 28 11:54:23 np0005539065 NetworkManager[857]: <info>  [1764348863.0304] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 28 11:54:23 np0005539065 NetworkManager[857]: <info>  [1764348863.0308] manager: startup complete
Nov 28 11:54:23 np0005539065 systemd[1]: Finished Network Manager Wait Online.
Nov 28 11:54:23 np0005539065 systemd[1]: Starting Cloud-init: Network Stage...
Nov 28 11:54:23 np0005539065 cloud-init[921]: Cloud-init v. 24.4-7.el9 running 'init' at Fri, 28 Nov 2025 16:54:23 +0000. Up 8.98 seconds.
Nov 28 11:54:23 np0005539065 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 28 11:54:23 np0005539065 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 28 11:54:23 np0005539065 cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 28 11:54:23 np0005539065 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 28 11:54:23 np0005539065 cloud-init[921]: ci-info: |  eth0  | True |         38.129.56.33         | 255.255.255.0 | global | fa:16:3e:07:c5:7a |
Nov 28 11:54:23 np0005539065 cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fe07:c57a/64 |       .       |  link  | fa:16:3e:07:c5:7a |
Nov 28 11:54:23 np0005539065 cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 28 11:54:23 np0005539065 cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 28 11:54:23 np0005539065 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 28 11:54:23 np0005539065 cloud-init[921]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Nov 28 11:54:23 np0005539065 cloud-init[921]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Nov 28 11:54:23 np0005539065 cloud-init[921]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Nov 28 11:54:23 np0005539065 cloud-init[921]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Nov 28 11:54:23 np0005539065 cloud-init[921]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Nov 28 11:54:23 np0005539065 cloud-init[921]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Nov 28 11:54:23 np0005539065 cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Nov 28 11:54:23 np0005539065 cloud-init[921]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Nov 28 11:54:23 np0005539065 cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 28 11:54:23 np0005539065 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 28 11:54:23 np0005539065 cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 28 11:54:23 np0005539065 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 28 11:54:23 np0005539065 cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 28 11:54:23 np0005539065 cloud-init[921]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Nov 28 11:54:23 np0005539065 cloud-init[921]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Nov 28 11:54:23 np0005539065 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 28 11:54:24 np0005539065 cloud-init[921]: Generating public/private rsa key pair.
Nov 28 11:54:24 np0005539065 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 28 11:54:24 np0005539065 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 28 11:54:24 np0005539065 cloud-init[921]: The key fingerprint is:
Nov 28 11:54:24 np0005539065 cloud-init[921]: SHA256:xMER2QvG1eYUW5viavH/OexTwCY4q76RPE9bFg9wjqI root@np0005539065.novalocal
Nov 28 11:54:24 np0005539065 cloud-init[921]: The key's randomart image is:
Nov 28 11:54:24 np0005539065 cloud-init[921]: +---[RSA 3072]----+
Nov 28 11:54:24 np0005539065 cloud-init[921]: |       o+*.....  |
Nov 28 11:54:24 np0005539065 cloud-init[921]: |       .*.. +o o |
Nov 28 11:54:24 np0005539065 cloud-init[921]: |       .o..*+.o  |
Nov 28 11:54:24 np0005539065 cloud-init[921]: |       .  +*o.+  |
Nov 28 11:54:24 np0005539065 cloud-init[921]: |        S oo=o . |
Nov 28 11:54:24 np0005539065 cloud-init[921]: |       o o.+ +  .|
Nov 28 11:54:24 np0005539065 cloud-init[921]: |      E =.+ + o .|
Nov 28 11:54:24 np0005539065 cloud-init[921]: |        .* + . +.|
Nov 28 11:54:24 np0005539065 cloud-init[921]: |       .o.o   o++|
Nov 28 11:54:24 np0005539065 cloud-init[921]: +----[SHA256]-----+
Nov 28 11:54:24 np0005539065 cloud-init[921]: Generating public/private ecdsa key pair.
Nov 28 11:54:24 np0005539065 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 28 11:54:24 np0005539065 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 28 11:54:24 np0005539065 cloud-init[921]: The key fingerprint is:
Nov 28 11:54:24 np0005539065 cloud-init[921]: SHA256:3rpJ+umSPsoKoD1HiCwbR4hZ4wQuUBSmlWgee9zaMVE root@np0005539065.novalocal
Nov 28 11:54:24 np0005539065 cloud-init[921]: The key's randomart image is:
Nov 28 11:54:24 np0005539065 cloud-init[921]: +---[ECDSA 256]---+
Nov 28 11:54:24 np0005539065 cloud-init[921]: |o=Xo  .E         |
Nov 28 11:54:24 np0005539065 cloud-init[921]: |=%.. .           |
Nov 28 11:54:24 np0005539065 cloud-init[921]: |O.* . .          |
Nov 28 11:54:24 np0005539065 cloud-init[921]: |o= + +           |
Nov 28 11:54:24 np0005539065 cloud-init[921]: |=.+ + o S        |
Nov 28 11:54:24 np0005539065 cloud-init[921]: |+= o . . .       |
Nov 28 11:54:24 np0005539065 cloud-init[921]: |+ o .  .o .      |
Nov 28 11:54:24 np0005539065 cloud-init[921]: | . +  +o +       |
Nov 28 11:54:24 np0005539065 cloud-init[921]: |  ..ooo=B.       |
Nov 28 11:54:24 np0005539065 cloud-init[921]: +----[SHA256]-----+
Nov 28 11:54:24 np0005539065 cloud-init[921]: Generating public/private ed25519 key pair.
Nov 28 11:54:24 np0005539065 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 28 11:54:24 np0005539065 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 28 11:54:24 np0005539065 cloud-init[921]: The key fingerprint is:
Nov 28 11:54:24 np0005539065 cloud-init[921]: SHA256:vzV9eTpNFl6+xnpo+XqG0Y2a7mm/JNOnA87fkOKAfD4 root@np0005539065.novalocal
Nov 28 11:54:24 np0005539065 cloud-init[921]: The key's randomart image is:
Nov 28 11:54:24 np0005539065 cloud-init[921]: +--[ED25519 256]--+
Nov 28 11:54:24 np0005539065 cloud-init[921]: |                 |
Nov 28 11:54:24 np0005539065 cloud-init[921]: |                 |
Nov 28 11:54:24 np0005539065 cloud-init[921]: |                 |
Nov 28 11:54:24 np0005539065 cloud-init[921]: |               ..|
Nov 28 11:54:24 np0005539065 cloud-init[921]: |        S     oo+|
Nov 28 11:54:24 np0005539065 cloud-init[921]: |       . o  .+.+*|
Nov 28 11:54:24 np0005539065 cloud-init[921]: |        o +o*+%==|
Nov 28 11:54:24 np0005539065 cloud-init[921]: |         oE=*%o#o|
Nov 28 11:54:24 np0005539065 cloud-init[921]: |          o=*=#=.|
Nov 28 11:54:24 np0005539065 cloud-init[921]: +----[SHA256]-----+
Nov 28 11:54:24 np0005539065 systemd[1]: Finished Cloud-init: Network Stage.
Nov 28 11:54:24 np0005539065 sm-notify[1005]: Version 2.5.4 starting
Nov 28 11:54:24 np0005539065 systemd[1]: Reached target Cloud-config availability.
Nov 28 11:54:24 np0005539065 systemd[1]: Reached target Network is Online.
Nov 28 11:54:24 np0005539065 systemd[1]: Starting Cloud-init: Config Stage...
Nov 28 11:54:24 np0005539065 systemd[1]: Starting Crash recovery kernel arming...
Nov 28 11:54:24 np0005539065 systemd[1]: Starting Notify NFS peers of a restart...
Nov 28 11:54:24 np0005539065 systemd[1]: Starting System Logging Service...
Nov 28 11:54:24 np0005539065 systemd[1]: Starting OpenSSH server daemon...
Nov 28 11:54:24 np0005539065 systemd[1]: Starting Permit User Sessions...
Nov 28 11:54:24 np0005539065 systemd[1]: Started Notify NFS peers of a restart.
Nov 28 11:54:24 np0005539065 systemd[1]: Started OpenSSH server daemon.
Nov 28 11:54:24 np0005539065 systemd[1]: Finished Permit User Sessions.
Nov 28 11:54:24 np0005539065 systemd[1]: Started Command Scheduler.
Nov 28 11:54:24 np0005539065 systemd[1]: Started Getty on tty1.
Nov 28 11:54:24 np0005539065 systemd[1]: Started Serial Getty on ttyS0.
Nov 28 11:54:24 np0005539065 systemd[1]: Reached target Login Prompts.
Nov 28 11:54:24 np0005539065 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Nov 28 11:54:24 np0005539065 rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 28 11:54:24 np0005539065 systemd[1]: Started System Logging Service.
Nov 28 11:54:24 np0005539065 systemd[1]: Reached target Multi-User System.
Nov 28 11:54:24 np0005539065 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 28 11:54:24 np0005539065 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 28 11:54:24 np0005539065 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 28 11:54:24 np0005539065 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 28 11:54:24 np0005539065 kdumpctl[1022]: kdump: No kdump initial ramdisk found.
Nov 28 11:54:24 np0005539065 kdumpctl[1022]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 28 11:54:24 np0005539065 cloud-init[1151]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Fri, 28 Nov 2025 16:54:24 +0000. Up 10.45 seconds.
Nov 28 11:54:24 np0005539065 systemd[1]: Finished Cloud-init: Config Stage.
Nov 28 11:54:24 np0005539065 systemd[1]: Starting Cloud-init: Final Stage...
Nov 28 11:54:25 np0005539065 dracut[1284]: dracut-057-102.git20250818.el9
Nov 28 11:54:25 np0005539065 cloud-init[1302]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Fri, 28 Nov 2025 16:54:25 +0000. Up 10.85 seconds.
Nov 28 11:54:25 np0005539065 cloud-init[1311]: #############################################################
Nov 28 11:54:25 np0005539065 cloud-init[1315]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 28 11:54:25 np0005539065 dracut[1286]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Nov 28 11:54:25 np0005539065 cloud-init[1322]: 256 SHA256:3rpJ+umSPsoKoD1HiCwbR4hZ4wQuUBSmlWgee9zaMVE root@np0005539065.novalocal (ECDSA)
Nov 28 11:54:25 np0005539065 cloud-init[1327]: 256 SHA256:vzV9eTpNFl6+xnpo+XqG0Y2a7mm/JNOnA87fkOKAfD4 root@np0005539065.novalocal (ED25519)
Nov 28 11:54:25 np0005539065 cloud-init[1332]: 3072 SHA256:xMER2QvG1eYUW5viavH/OexTwCY4q76RPE9bFg9wjqI root@np0005539065.novalocal (RSA)
Nov 28 11:54:25 np0005539065 cloud-init[1334]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 28 11:54:25 np0005539065 cloud-init[1336]: #############################################################
Nov 28 11:54:25 np0005539065 cloud-init[1302]: Cloud-init v. 24.4-7.el9 finished at Fri, 28 Nov 2025 16:54:25 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.01 seconds
Nov 28 11:54:25 np0005539065 systemd[1]: Finished Cloud-init: Final Stage.
Nov 28 11:54:25 np0005539065 systemd[1]: Reached target Cloud-init target.
Nov 28 11:54:25 np0005539065 dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 28 11:54:25 np0005539065 dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 28 11:54:25 np0005539065 dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 28 11:54:25 np0005539065 dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 28 11:54:25 np0005539065 dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 28 11:54:25 np0005539065 dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 28 11:54:25 np0005539065 dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 28 11:54:25 np0005539065 dracut[1286]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 28 11:54:25 np0005539065 dracut[1286]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 28 11:54:25 np0005539065 dracut[1286]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 28 11:54:25 np0005539065 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 28 11:54:25 np0005539065 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 28 11:54:25 np0005539065 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 28 11:54:25 np0005539065 dracut[1286]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 28 11:54:25 np0005539065 dracut[1286]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 28 11:54:25 np0005539065 dracut[1286]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 28 11:54:25 np0005539065 dracut[1286]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 28 11:54:25 np0005539065 dracut[1286]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 28 11:54:25 np0005539065 dracut[1286]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 28 11:54:25 np0005539065 dracut[1286]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 28 11:54:25 np0005539065 dracut[1286]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: memstrack is not available
Nov 28 11:54:26 np0005539065 dracut[1286]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 28 11:54:26 np0005539065 dracut[1286]: memstrack is not available
Nov 28 11:54:26 np0005539065 dracut[1286]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 28 11:54:26 np0005539065 dracut[1286]: *** Including module: systemd ***
Nov 28 11:54:27 np0005539065 dracut[1286]: *** Including module: fips ***
Nov 28 11:54:27 np0005539065 dracut[1286]: *** Including module: systemd-initrd ***
Nov 28 11:54:27 np0005539065 dracut[1286]: *** Including module: i18n ***
Nov 28 11:54:27 np0005539065 dracut[1286]: *** Including module: drm ***
Nov 28 11:54:27 np0005539065 dracut[1286]: *** Including module: prefixdevname ***
Nov 28 11:54:27 np0005539065 dracut[1286]: *** Including module: kernel-modules ***
Nov 28 11:54:28 np0005539065 kernel: block vda: the capability attribute has been deprecated.
Nov 28 11:54:28 np0005539065 chronyd[793]: Selected source 23.128.92.19 (2.centos.pool.ntp.org)
Nov 28 11:54:28 np0005539065 chronyd[793]: System clock TAI offset set to 37 seconds
Nov 28 11:54:28 np0005539065 dracut[1286]: *** Including module: kernel-modules-extra ***
Nov 28 11:54:28 np0005539065 dracut[1286]: *** Including module: qemu ***
Nov 28 11:54:28 np0005539065 dracut[1286]: *** Including module: fstab-sys ***
Nov 28 11:54:28 np0005539065 dracut[1286]: *** Including module: rootfs-block ***
Nov 28 11:54:28 np0005539065 dracut[1286]: *** Including module: terminfo ***
Nov 28 11:54:28 np0005539065 dracut[1286]: *** Including module: udev-rules ***
Nov 28 11:54:29 np0005539065 dracut[1286]: Skipping udev rule: 91-permissions.rules
Nov 28 11:54:29 np0005539065 dracut[1286]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 28 11:54:29 np0005539065 dracut[1286]: *** Including module: virtiofs ***
Nov 28 11:54:29 np0005539065 dracut[1286]: *** Including module: dracut-systemd ***
Nov 28 11:54:29 np0005539065 chronyd[793]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Nov 28 11:54:29 np0005539065 dracut[1286]: *** Including module: usrmount ***
Nov 28 11:54:29 np0005539065 dracut[1286]: *** Including module: base ***
Nov 28 11:54:29 np0005539065 dracut[1286]: *** Including module: fs-lib ***
Nov 28 11:54:29 np0005539065 dracut[1286]: *** Including module: kdumpbase ***
Nov 28 11:54:30 np0005539065 dracut[1286]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 28 11:54:30 np0005539065 dracut[1286]:  microcode_ctl module: mangling fw_dir
Nov 28 11:54:30 np0005539065 dracut[1286]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 28 11:54:30 np0005539065 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 28 11:54:30 np0005539065 dracut[1286]:    microcode_ctl: configuration "intel" is ignored
Nov 28 11:54:30 np0005539065 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 28 11:54:30 np0005539065 dracut[1286]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 28 11:54:30 np0005539065 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 28 11:54:30 np0005539065 dracut[1286]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 28 11:54:30 np0005539065 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 28 11:54:30 np0005539065 dracut[1286]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 28 11:54:30 np0005539065 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 28 11:54:30 np0005539065 dracut[1286]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 28 11:54:30 np0005539065 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 28 11:54:30 np0005539065 dracut[1286]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 28 11:54:30 np0005539065 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 28 11:54:30 np0005539065 irqbalance[780]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 28 11:54:30 np0005539065 irqbalance[780]: IRQ 25 affinity is now unmanaged
Nov 28 11:54:30 np0005539065 irqbalance[780]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 28 11:54:30 np0005539065 irqbalance[780]: IRQ 31 affinity is now unmanaged
Nov 28 11:54:30 np0005539065 irqbalance[780]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 28 11:54:30 np0005539065 irqbalance[780]: IRQ 28 affinity is now unmanaged
Nov 28 11:54:30 np0005539065 irqbalance[780]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 28 11:54:30 np0005539065 irqbalance[780]: IRQ 32 affinity is now unmanaged
Nov 28 11:54:30 np0005539065 irqbalance[780]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 28 11:54:30 np0005539065 irqbalance[780]: IRQ 30 affinity is now unmanaged
Nov 28 11:54:30 np0005539065 irqbalance[780]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 28 11:54:30 np0005539065 irqbalance[780]: IRQ 29 affinity is now unmanaged
Nov 28 11:54:30 np0005539065 dracut[1286]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 28 11:54:30 np0005539065 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 28 11:54:30 np0005539065 dracut[1286]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 28 11:54:30 np0005539065 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 28 11:54:30 np0005539065 dracut[1286]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 28 11:54:30 np0005539065 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 28 11:54:30 np0005539065 dracut[1286]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 28 11:54:30 np0005539065 dracut[1286]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 28 11:54:30 np0005539065 dracut[1286]: *** Including module: openssl ***
Nov 28 11:54:30 np0005539065 dracut[1286]: *** Including module: shutdown ***
Nov 28 11:54:30 np0005539065 dracut[1286]: *** Including module: squash ***
Nov 28 11:54:30 np0005539065 dracut[1286]: *** Including modules done ***
Nov 28 11:54:30 np0005539065 dracut[1286]: *** Installing kernel module dependencies ***
Nov 28 11:54:31 np0005539065 dracut[1286]: *** Installing kernel module dependencies done ***
Nov 28 11:54:31 np0005539065 dracut[1286]: *** Resolving executable dependencies ***
Nov 28 11:54:32 np0005539065 dracut[1286]: *** Resolving executable dependencies done ***
Nov 28 11:54:32 np0005539065 dracut[1286]: *** Generating early-microcode cpio image ***
Nov 28 11:54:33 np0005539065 dracut[1286]: *** Store current command line parameters ***
Nov 28 11:54:33 np0005539065 dracut[1286]: Stored kernel commandline:
Nov 28 11:54:33 np0005539065 dracut[1286]: No dracut internal kernel commandline stored in the initramfs
Nov 28 11:54:33 np0005539065 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 28 11:54:33 np0005539065 dracut[1286]: *** Install squash loader ***
Nov 28 11:54:34 np0005539065 dracut[1286]: *** Squashing the files inside the initramfs ***
Nov 28 11:54:35 np0005539065 dracut[1286]: *** Squashing the files inside the initramfs done ***
Nov 28 11:54:35 np0005539065 dracut[1286]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 28 11:54:35 np0005539065 dracut[1286]: *** Hardlinking files ***
Nov 28 11:54:35 np0005539065 dracut[1286]: *** Hardlinking files done ***
Nov 28 11:54:35 np0005539065 dracut[1286]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 28 11:54:36 np0005539065 kdumpctl[1022]: kdump: kexec: loaded kdump kernel
Nov 28 11:54:36 np0005539065 kdumpctl[1022]: kdump: Starting kdump: [OK]
Nov 28 11:54:36 np0005539065 systemd[1]: Finished Crash recovery kernel arming.
Nov 28 11:54:36 np0005539065 systemd[1]: Startup finished in 1.542s (kernel) + 2.569s (initrd) + 18.052s (userspace) = 22.164s.
Nov 28 11:54:40 np0005539065 irqbalance[780]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 28 11:54:40 np0005539065 irqbalance[780]: IRQ 27 affinity is now unmanaged
Nov 28 11:54:43 np0005539065 systemd-logind[790]: New session 1 of user zuul.
Nov 28 11:54:43 np0005539065 systemd[1]: Created slice User Slice of UID 1000.
Nov 28 11:54:43 np0005539065 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 28 11:54:43 np0005539065 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 28 11:54:43 np0005539065 systemd[1]: Starting User Manager for UID 1000...
Nov 28 11:54:43 np0005539065 systemd[4300]: Queued start job for default target Main User Target.
Nov 28 11:54:43 np0005539065 systemd[4300]: Created slice User Application Slice.
Nov 28 11:54:43 np0005539065 systemd[4300]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 28 11:54:43 np0005539065 systemd[4300]: Started Daily Cleanup of User's Temporary Directories.
Nov 28 11:54:43 np0005539065 systemd[4300]: Reached target Paths.
Nov 28 11:54:43 np0005539065 systemd[4300]: Reached target Timers.
Nov 28 11:54:43 np0005539065 systemd[4300]: Starting D-Bus User Message Bus Socket...
Nov 28 11:54:43 np0005539065 systemd[4300]: Starting Create User's Volatile Files and Directories...
Nov 28 11:54:43 np0005539065 systemd[4300]: Listening on D-Bus User Message Bus Socket.
Nov 28 11:54:43 np0005539065 systemd[4300]: Reached target Sockets.
Nov 28 11:54:43 np0005539065 systemd[4300]: Finished Create User's Volatile Files and Directories.
Nov 28 11:54:43 np0005539065 systemd[4300]: Reached target Basic System.
Nov 28 11:54:43 np0005539065 systemd[4300]: Reached target Main User Target.
Nov 28 11:54:43 np0005539065 systemd[4300]: Startup finished in 116ms.
Nov 28 11:54:43 np0005539065 systemd[1]: Started User Manager for UID 1000.
Nov 28 11:54:43 np0005539065 systemd[1]: Started Session 1 of User zuul.
Nov 28 11:54:44 np0005539065 python3[4382]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 11:54:47 np0005539065 python3[4410]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 11:54:50 np0005539065 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 28 11:54:52 np0005539065 python3[4470]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 11:54:53 np0005539065 python3[4510]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 28 11:54:55 np0005539065 python3[4536]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYSwPzVQdlbv7ramHHCNYx2stvCz2Jp6fHYfksSNj6N9WvvmS/tohxlNuQfEM8Y4pKs/w9+2KLbolLVBaqcqtmgfimVvvHegTbPP/jGigI/SxLgrlppDj+XBrG2NrEzCe8PAi/R89sA4ZBomtVf2Z7zTsKfAQ8EjC8bmUh2otjM5v66abqD1v/7Sd6tv0UyQXusyVEFTb65JzV/sDYBG/uik8fhUPNNcssAkphYjCUDvoDNkBQPTececPGCdC6WmVU+A8uuqnAF6jAx5/TW16hcXZR8Qj3h+k3fCHdcMrp1w7QeF9Ccoc4krNhfTxebyZaHOEGZAEd3aA5Z9YQhzhNQFQ60Sd5QyX7vgDcKw535ur2ZUh5zNvuHO+WyCjfpMhdRGPt4NlwZ0qWbTsWWAdfrtlaqQTWhnuQF0PRzQ0ZAIVaYdqe0+WHgvkvRJSIuQnmBV8Ub3yhLnmb9jCq97+8uHreEmnjUPkh/dE96IfljIyWLtp14t+sCnHy98qgiOE= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:54:55 np0005539065 python3[4560]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 11:54:56 np0005539065 python3[4659]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 11:54:56 np0005539065 python3[4730]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764348895.9139838-207-37062708510419/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=37c73b774a4649e69ee660c8a26d2164_id_rsa follow=False checksum=64f16f5be041482c543c7e17279f9ddbea9a055d backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 11:54:57 np0005539065 python3[4853]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 11:54:57 np0005539065 python3[4924]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764348896.7649899-240-197190119010736/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=37c73b774a4649e69ee660c8a26d2164_id_rsa.pub follow=False checksum=e04f8d0816a67b3ce28a14bd91b50015c46cf44f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 11:54:58 np0005539065 python3[4972]: ansible-ping Invoked with data=pong
Nov 28 11:54:59 np0005539065 python3[4996]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 11:55:01 np0005539065 python3[5054]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 28 11:55:02 np0005539065 python3[5086]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 11:55:02 np0005539065 python3[5110]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 11:55:02 np0005539065 python3[5134]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 11:55:02 np0005539065 python3[5158]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 11:55:02 np0005539065 python3[5182]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 11:55:03 np0005539065 python3[5206]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 11:55:04 np0005539065 python3[5232]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 11:55:05 np0005539065 python3[5310]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 11:55:05 np0005539065 python3[5383]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764348904.789487-21-49679440805887/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 11:55:06 np0005539065 python3[5431]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:55:06 np0005539065 python3[5455]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:55:06 np0005539065 python3[5479]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:55:07 np0005539065 python3[5503]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:55:07 np0005539065 python3[5527]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:55:07 np0005539065 python3[5551]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:55:07 np0005539065 python3[5575]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:55:08 np0005539065 python3[5599]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:55:08 np0005539065 python3[5623]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:55:09 np0005539065 python3[5647]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:55:09 np0005539065 python3[5671]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:55:09 np0005539065 python3[5695]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:55:10 np0005539065 python3[5719]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:55:10 np0005539065 python3[5743]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:55:10 np0005539065 irqbalance[780]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 28 11:55:10 np0005539065 irqbalance[780]: IRQ 26 affinity is now unmanaged
Nov 28 11:55:10 np0005539065 python3[5767]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:55:10 np0005539065 python3[5791]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:55:11 np0005539065 python3[5815]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:55:11 np0005539065 python3[5839]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:55:11 np0005539065 python3[5863]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:55:11 np0005539065 python3[5887]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:55:12 np0005539065 python3[5911]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:55:12 np0005539065 python3[5935]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:55:12 np0005539065 python3[5959]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:55:12 np0005539065 python3[5983]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:55:13 np0005539065 python3[6007]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:55:13 np0005539065 python3[6031]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 11:55:16 np0005539065 python3[6057]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 28 11:55:16 np0005539065 systemd[1]: Starting Time & Date Service...
Nov 28 11:55:16 np0005539065 systemd[1]: Started Time & Date Service.
Nov 28 11:55:16 np0005539065 systemd-timedated[6059]: Changed time zone to 'UTC' (UTC).
Nov 28 11:55:16 np0005539065 python3[6088]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 11:55:17 np0005539065 python3[6164]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 11:55:17 np0005539065 python3[6235]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764348916.884548-153-120661803942456/source _original_basename=tmpt2j9t9_y follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 11:55:17 np0005539065 python3[6335]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 11:55:18 np0005539065 python3[6406]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764348917.7323477-183-255123533941820/source _original_basename=tmp2magfz25 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 11:55:19 np0005539065 python3[6508]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 11:55:19 np0005539065 python3[6581]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764348918.795133-231-183213480946877/source _original_basename=tmp3yeobedd follow=False checksum=5c3ab9a5f55ea05adfaf0cbb34eaa95f5bcd535a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 11:55:19 np0005539065 python3[6629]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 11:55:20 np0005539065 python3[6655]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 11:55:20 np0005539065 python3[6735]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 11:55:21 np0005539065 python3[6808]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764348920.4060767-273-121736135985321/source _original_basename=tmpq032_xol follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 11:55:21 np0005539065 python3[6859]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-5266-001b-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 11:55:22 np0005539065 python3[6887]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-5266-001b-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 28 11:55:23 np0005539065 python3[6915]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 11:55:42 np0005539065 python3[6941]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 11:55:46 np0005539065 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 28 11:56:20 np0005539065 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 28 11:56:20 np0005539065 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 28 11:56:20 np0005539065 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 28 11:56:20 np0005539065 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 28 11:56:20 np0005539065 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 28 11:56:20 np0005539065 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 28 11:56:20 np0005539065 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 28 11:56:20 np0005539065 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 28 11:56:20 np0005539065 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 28 11:56:20 np0005539065 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 28 11:56:20 np0005539065 NetworkManager[857]: <info>  [1764348980.7942] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 28 11:56:20 np0005539065 systemd-udevd[6945]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 11:56:20 np0005539065 NetworkManager[857]: <info>  [1764348980.8111] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 28 11:56:20 np0005539065 NetworkManager[857]: <info>  [1764348980.8139] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 28 11:56:20 np0005539065 NetworkManager[857]: <info>  [1764348980.8143] device (eth1): carrier: link connected
Nov 28 11:56:20 np0005539065 NetworkManager[857]: <info>  [1764348980.8144] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 28 11:56:20 np0005539065 NetworkManager[857]: <info>  [1764348980.8149] policy: auto-activating connection 'Wired connection 1' (fece0453-87ca-3af0-bdf9-bbcfdc8b0a82)
Nov 28 11:56:20 np0005539065 NetworkManager[857]: <info>  [1764348980.8153] device (eth1): Activation: starting connection 'Wired connection 1' (fece0453-87ca-3af0-bdf9-bbcfdc8b0a82)
Nov 28 11:56:20 np0005539065 NetworkManager[857]: <info>  [1764348980.8154] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 28 11:56:20 np0005539065 NetworkManager[857]: <info>  [1764348980.8156] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 28 11:56:20 np0005539065 NetworkManager[857]: <info>  [1764348980.8160] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 28 11:56:20 np0005539065 NetworkManager[857]: <info>  [1764348980.8164] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 28 11:56:21 np0005539065 python3[6971]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-4f14-e6a3-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 11:56:28 np0005539065 python3[7051]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 11:56:28 np0005539065 python3[7124]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764348988.302507-102-91111445855410/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=3162f4141b0f8ee7d238c8d0b985d94b09e5e29a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 11:56:29 np0005539065 python3[7174]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 11:56:29 np0005539065 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 28 11:56:29 np0005539065 systemd[1]: Stopped Network Manager Wait Online.
Nov 28 11:56:29 np0005539065 systemd[1]: Stopping Network Manager Wait Online...
Nov 28 11:56:29 np0005539065 NetworkManager[857]: <info>  [1764348989.8117] caught SIGTERM, shutting down normally.
Nov 28 11:56:29 np0005539065 systemd[1]: Stopping Network Manager...
Nov 28 11:56:29 np0005539065 NetworkManager[857]: <info>  [1764348989.8128] dhcp4 (eth0): canceled DHCP transaction
Nov 28 11:56:29 np0005539065 NetworkManager[857]: <info>  [1764348989.8129] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 28 11:56:29 np0005539065 NetworkManager[857]: <info>  [1764348989.8129] dhcp4 (eth0): state changed no lease
Nov 28 11:56:29 np0005539065 NetworkManager[857]: <info>  [1764348989.8132] manager: NetworkManager state is now CONNECTING
Nov 28 11:56:29 np0005539065 NetworkManager[857]: <info>  [1764348989.8231] dhcp4 (eth1): canceled DHCP transaction
Nov 28 11:56:29 np0005539065 NetworkManager[857]: <info>  [1764348989.8232] dhcp4 (eth1): state changed no lease
Nov 28 11:56:29 np0005539065 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 28 11:56:29 np0005539065 NetworkManager[857]: <info>  [1764348989.8303] exiting (success)
Nov 28 11:56:29 np0005539065 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 28 11:56:29 np0005539065 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 28 11:56:29 np0005539065 systemd[1]: Stopped Network Manager.
Nov 28 11:56:29 np0005539065 systemd[1]: Starting Network Manager...
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.8711] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:689ffb1a-47b1-4ef9-97d1-e98882930650)
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.8713] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.8772] manager[0x55b25e33e070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 28 11:56:29 np0005539065 systemd[1]: Starting Hostname Service...
Nov 28 11:56:29 np0005539065 systemd[1]: Started Hostname Service.
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9568] hostname: hostname: using hostnamed
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9568] hostname: static hostname changed from (none) to "np0005539065.novalocal"
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9572] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9576] manager[0x55b25e33e070]: rfkill: Wi-Fi hardware radio set enabled
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9576] manager[0x55b25e33e070]: rfkill: WWAN hardware radio set enabled
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9598] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9598] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9599] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9599] manager: Networking is enabled by state file
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9601] settings: Loaded settings plugin: keyfile (internal)
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9605] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9625] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9632] dhcp: init: Using DHCP client 'internal'
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9634] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9638] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9643] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9649] device (lo): Activation: starting connection 'lo' (ebd0c5b7-fd31-4dc9-bad3-b5977a867d53)
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9655] device (eth0): carrier: link connected
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9659] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9662] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9663] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9667] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9672] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9676] device (eth1): carrier: link connected
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9679] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9682] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (fece0453-87ca-3af0-bdf9-bbcfdc8b0a82) (indicated)
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9683] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9687] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9692] device (eth1): Activation: starting connection 'Wired connection 1' (fece0453-87ca-3af0-bdf9-bbcfdc8b0a82)
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9698] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 28 11:56:29 np0005539065 systemd[1]: Started Network Manager.
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9701] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9703] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9704] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9706] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9708] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9709] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9712] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9722] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9730] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9733] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9740] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9742] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9763] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9765] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9769] device (lo): Activation: successful, device activated.
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9775] dhcp4 (eth0): state changed new lease, address=38.129.56.33
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9780] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9870] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9903] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9905] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9910] manager: NetworkManager state is now CONNECTED_SITE
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9915] device (eth0): Activation: successful, device activated.
Nov 28 11:56:29 np0005539065 NetworkManager[7185]: <info>  [1764348989.9920] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 28 11:56:29 np0005539065 systemd[1]: Starting Network Manager Wait Online...
Nov 28 11:56:30 np0005539065 python3[7258]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-4f14-e6a3-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 11:56:40 np0005539065 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 28 11:56:59 np0005539065 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 28 11:57:15 np0005539065 NetworkManager[7185]: <info>  [1764349035.3473] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 28 11:57:15 np0005539065 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 28 11:57:15 np0005539065 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 28 11:57:15 np0005539065 NetworkManager[7185]: <info>  [1764349035.3706] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 28 11:57:15 np0005539065 NetworkManager[7185]: <info>  [1764349035.3708] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 28 11:57:15 np0005539065 NetworkManager[7185]: <info>  [1764349035.3714] device (eth1): Activation: successful, device activated.
Nov 28 11:57:15 np0005539065 NetworkManager[7185]: <info>  [1764349035.3720] manager: startup complete
Nov 28 11:57:15 np0005539065 NetworkManager[7185]: <info>  [1764349035.3723] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 28 11:57:15 np0005539065 NetworkManager[7185]: <warn>  [1764349035.3727] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 28 11:57:15 np0005539065 NetworkManager[7185]: <info>  [1764349035.3733] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 28 11:57:15 np0005539065 systemd[1]: Finished Network Manager Wait Online.
Nov 28 11:57:15 np0005539065 NetworkManager[7185]: <info>  [1764349035.3831] dhcp4 (eth1): canceled DHCP transaction
Nov 28 11:57:15 np0005539065 NetworkManager[7185]: <info>  [1764349035.3832] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 28 11:57:15 np0005539065 NetworkManager[7185]: <info>  [1764349035.3832] dhcp4 (eth1): state changed no lease
Nov 28 11:57:15 np0005539065 NetworkManager[7185]: <info>  [1764349035.3847] policy: auto-activating connection 'ci-private-network' (c5b07057-7ecd-510b-8309-3fe2cb8f2f90)
Nov 28 11:57:15 np0005539065 NetworkManager[7185]: <info>  [1764349035.3851] device (eth1): Activation: starting connection 'ci-private-network' (c5b07057-7ecd-510b-8309-3fe2cb8f2f90)
Nov 28 11:57:15 np0005539065 NetworkManager[7185]: <info>  [1764349035.3852] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 28 11:57:15 np0005539065 NetworkManager[7185]: <info>  [1764349035.3854] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 28 11:57:15 np0005539065 NetworkManager[7185]: <info>  [1764349035.3859] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 28 11:57:15 np0005539065 NetworkManager[7185]: <info>  [1764349035.3866] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 28 11:57:15 np0005539065 NetworkManager[7185]: <info>  [1764349035.3913] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 28 11:57:15 np0005539065 NetworkManager[7185]: <info>  [1764349035.3916] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 28 11:57:15 np0005539065 NetworkManager[7185]: <info>  [1764349035.3925] device (eth1): Activation: successful, device activated.
Nov 28 11:57:21 np0005539065 systemd[4300]: Starting Mark boot as successful...
Nov 28 11:57:21 np0005539065 systemd[4300]: Finished Mark boot as successful.
Nov 28 11:57:25 np0005539065 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 28 11:57:30 np0005539065 systemd-logind[790]: Session 1 logged out. Waiting for processes to exit.
Nov 28 11:57:30 np0005539065 systemd-logind[790]: New session 3 of user zuul.
Nov 28 11:57:30 np0005539065 systemd[1]: Started Session 3 of User zuul.
Nov 28 11:57:30 np0005539065 python3[7368]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 11:57:30 np0005539065 python3[7441]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764349050.2466247-259-148244643390474/source _original_basename=tmpnk8qjjq_ follow=False checksum=c50371685604afc8de094d00913d29d846bddd90 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 11:57:33 np0005539065 systemd[1]: session-3.scope: Deactivated successfully.
Nov 28 11:57:33 np0005539065 systemd-logind[790]: Session 3 logged out. Waiting for processes to exit.
Nov 28 11:57:33 np0005539065 systemd-logind[790]: Removed session 3.
Nov 28 12:00:21 np0005539065 systemd[4300]: Created slice User Background Tasks Slice.
Nov 28 12:00:21 np0005539065 systemd[4300]: Starting Cleanup of User's Temporary Files and Directories...
Nov 28 12:00:21 np0005539065 systemd[4300]: Finished Cleanup of User's Temporary Files and Directories.
Nov 28 12:04:59 np0005539065 systemd-logind[790]: New session 4 of user zuul.
Nov 28 12:04:59 np0005539065 systemd[1]: Started Session 4 of User zuul.
Nov 28 12:04:59 np0005539065 python3[7521]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-f9d0-5b22-000000001cf4-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:04:59 np0005539065 python3[7550]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:04:59 np0005539065 python3[7576]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:05:00 np0005539065 python3[7602]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:05:00 np0005539065 python3[7628]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:05:01 np0005539065 python3[7654]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:05:01 np0005539065 python3[7732]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 12:05:02 np0005539065 python3[7805]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764349501.6288533-496-56488396656872/source _original_basename=tmpwllwqm7k follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:05:03 np0005539065 python3[7855]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 28 12:05:03 np0005539065 systemd[1]: Reloading.
Nov 28 12:05:03 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:05:04 np0005539065 python3[7911]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 28 12:05:05 np0005539065 python3[7937]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:05:05 np0005539065 python3[7965]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:05:05 np0005539065 python3[7993]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:05:06 np0005539065 python3[8021]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:05:06 np0005539065 python3[8048]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-f9d0-5b22-000000001cfb-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:05:07 np0005539065 python3[8078]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 12:05:09 np0005539065 systemd-logind[790]: Session 4 logged out. Waiting for processes to exit.
Nov 28 12:05:09 np0005539065 systemd[1]: session-4.scope: Deactivated successfully.
Nov 28 12:05:09 np0005539065 systemd[1]: session-4.scope: Consumed 3.807s CPU time.
Nov 28 12:05:09 np0005539065 systemd-logind[790]: Removed session 4.
Nov 28 12:05:11 np0005539065 systemd-logind[790]: New session 5 of user zuul.
Nov 28 12:05:11 np0005539065 systemd[1]: Started Session 5 of User zuul.
Nov 28 12:05:11 np0005539065 python3[8112]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 28 12:05:25 np0005539065 kernel: SELinux:  Converting 386 SID table entries...
Nov 28 12:05:25 np0005539065 kernel: SELinux:  policy capability network_peer_controls=1
Nov 28 12:05:25 np0005539065 kernel: SELinux:  policy capability open_perms=1
Nov 28 12:05:25 np0005539065 kernel: SELinux:  policy capability extended_socket_class=1
Nov 28 12:05:25 np0005539065 kernel: SELinux:  policy capability always_check_network=0
Nov 28 12:05:25 np0005539065 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 28 12:05:25 np0005539065 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 28 12:05:25 np0005539065 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 28 12:05:35 np0005539065 kernel: SELinux:  Converting 386 SID table entries...
Nov 28 12:05:36 np0005539065 kernel: SELinux:  policy capability network_peer_controls=1
Nov 28 12:05:36 np0005539065 kernel: SELinux:  policy capability open_perms=1
Nov 28 12:05:36 np0005539065 kernel: SELinux:  policy capability extended_socket_class=1
Nov 28 12:05:36 np0005539065 kernel: SELinux:  policy capability always_check_network=0
Nov 28 12:05:36 np0005539065 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 28 12:05:36 np0005539065 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 28 12:05:36 np0005539065 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 28 12:05:47 np0005539065 kernel: SELinux:  Converting 386 SID table entries...
Nov 28 12:05:47 np0005539065 kernel: SELinux:  policy capability network_peer_controls=1
Nov 28 12:05:47 np0005539065 kernel: SELinux:  policy capability open_perms=1
Nov 28 12:05:47 np0005539065 kernel: SELinux:  policy capability extended_socket_class=1
Nov 28 12:05:47 np0005539065 kernel: SELinux:  policy capability always_check_network=0
Nov 28 12:05:47 np0005539065 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 28 12:05:47 np0005539065 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 28 12:05:47 np0005539065 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 28 12:05:48 np0005539065 setsebool[8179]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 28 12:05:48 np0005539065 setsebool[8179]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 28 12:06:01 np0005539065 kernel: SELinux:  Converting 389 SID table entries...
Nov 28 12:06:01 np0005539065 kernel: SELinux:  policy capability network_peer_controls=1
Nov 28 12:06:01 np0005539065 kernel: SELinux:  policy capability open_perms=1
Nov 28 12:06:01 np0005539065 kernel: SELinux:  policy capability extended_socket_class=1
Nov 28 12:06:01 np0005539065 kernel: SELinux:  policy capability always_check_network=0
Nov 28 12:06:01 np0005539065 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 28 12:06:01 np0005539065 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 28 12:06:01 np0005539065 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 28 12:06:18 np0005539065 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 28 12:06:18 np0005539065 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 28 12:06:18 np0005539065 systemd[1]: Starting man-db-cache-update.service...
Nov 28 12:06:19 np0005539065 systemd[1]: Reloading.
Nov 28 12:06:19 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:06:19 np0005539065 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 28 12:06:24 np0005539065 python3[13134]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-58db-b144-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:06:24 np0005539065 kernel: evm: overlay not supported
Nov 28 12:06:24 np0005539065 systemd[4300]: Starting D-Bus User Message Bus...
Nov 28 12:06:24 np0005539065 dbus-broker-launch[13916]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 28 12:06:24 np0005539065 dbus-broker-launch[13916]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 28 12:06:24 np0005539065 systemd[4300]: Started D-Bus User Message Bus.
Nov 28 12:06:24 np0005539065 dbus-broker-lau[13916]: Ready
Nov 28 12:06:24 np0005539065 systemd[4300]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 28 12:06:24 np0005539065 systemd[4300]: Created slice Slice /user.
Nov 28 12:06:24 np0005539065 systemd[4300]: podman-13820.scope: unit configures an IP firewall, but not running as root.
Nov 28 12:06:24 np0005539065 systemd[4300]: (This warning is only shown for the first unit using IP firewalling.)
Nov 28 12:06:24 np0005539065 systemd[4300]: Started podman-13820.scope.
Nov 28 12:06:25 np0005539065 systemd[4300]: Started podman-pause-19894f23.scope.
Nov 28 12:06:25 np0005539065 python3[14031]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.89:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.89:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:06:25 np0005539065 python3[14031]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 28 12:06:26 np0005539065 systemd[1]: session-5.scope: Deactivated successfully.
Nov 28 12:06:26 np0005539065 systemd[1]: session-5.scope: Consumed 1min 5.719s CPU time.
Nov 28 12:06:26 np0005539065 systemd-logind[790]: Session 5 logged out. Waiting for processes to exit.
Nov 28 12:06:26 np0005539065 systemd-logind[790]: Removed session 5.
Nov 28 12:06:52 np0005539065 systemd-logind[790]: New session 6 of user zuul.
Nov 28 12:06:52 np0005539065 systemd[1]: Started Session 6 of User zuul.
Nov 28 12:06:52 np0005539065 python3[25386]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLrBG4uCPT2P4ZuDeyr1Dwk2HnLi+p5i3X9CCRBEo4kGZnEoeamp62zL3pCTeEvAfWxePI1fzAcpbrcizn38rG0= zuul@np0005539064.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 12:06:52 np0005539065 python3[25624]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLrBG4uCPT2P4ZuDeyr1Dwk2HnLi+p5i3X9CCRBEo4kGZnEoeamp62zL3pCTeEvAfWxePI1fzAcpbrcizn38rG0= zuul@np0005539064.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 12:06:53 np0005539065 python3[25993]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005539065.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 28 12:06:54 np0005539065 python3[26333]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLrBG4uCPT2P4ZuDeyr1Dwk2HnLi+p5i3X9CCRBEo4kGZnEoeamp62zL3pCTeEvAfWxePI1fzAcpbrcizn38rG0= zuul@np0005539064.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 12:06:55 np0005539065 python3[26615]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 12:06:55 np0005539065 python3[26856]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764349615.0576162-135-166111960198641/source _original_basename=tmpcw5qj7x1 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:06:56 np0005539065 python3[27210]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 28 12:06:56 np0005539065 systemd[1]: Starting Hostname Service...
Nov 28 12:06:56 np0005539065 systemd[1]: Started Hostname Service.
Nov 28 12:06:56 np0005539065 systemd-hostnamed[27347]: Changed pretty hostname to 'compute-0'
Nov 28 12:06:56 np0005539065 systemd-hostnamed[27347]: Hostname set to <compute-0> (static)
Nov 28 12:06:56 np0005539065 NetworkManager[7185]: <info>  [1764349616.7306] hostname: static hostname changed from "np0005539065.novalocal" to "compute-0"
Nov 28 12:06:56 np0005539065 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 28 12:06:56 np0005539065 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 28 12:06:57 np0005539065 systemd[1]: session-6.scope: Deactivated successfully.
Nov 28 12:06:57 np0005539065 systemd[1]: session-6.scope: Consumed 2.200s CPU time.
Nov 28 12:06:57 np0005539065 systemd-logind[790]: Session 6 logged out. Waiting for processes to exit.
Nov 28 12:06:57 np0005539065 systemd-logind[790]: Removed session 6.
Nov 28 12:07:06 np0005539065 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 28 12:07:06 np0005539065 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 28 12:07:06 np0005539065 systemd[1]: Finished man-db-cache-update.service.
Nov 28 12:07:06 np0005539065 systemd[1]: man-db-cache-update.service: Consumed 51.221s CPU time.
Nov 28 12:07:06 np0005539065 systemd[1]: run-r93b465ebc27244e4aae1e7b50d10730b.service: Deactivated successfully.
Nov 28 12:07:26 np0005539065 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 28 12:09:21 np0005539065 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 28 12:09:21 np0005539065 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 28 12:09:21 np0005539065 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 28 12:09:21 np0005539065 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 28 12:12:26 np0005539065 systemd-logind[790]: New session 7 of user zuul.
Nov 28 12:12:26 np0005539065 systemd[1]: Started Session 7 of User zuul.
Nov 28 12:12:27 np0005539065 python3[30067]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:12:29 np0005539065 python3[30183]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 12:12:29 np0005539065 python3[30256]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764349948.9250634-33626-137420876386793/source mode=0755 _original_basename=delorean.repo follow=False checksum=a16f090252000d02a7f7d540bb10f7c1c9cd4ac5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:12:29 np0005539065 python3[30282]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 12:12:30 np0005539065 python3[30355]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764349948.9250634-33626-137420876386793/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:12:30 np0005539065 python3[30381]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 12:12:30 np0005539065 python3[30454]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764349948.9250634-33626-137420876386793/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:12:30 np0005539065 python3[30480]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 12:12:31 np0005539065 python3[30553]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764349948.9250634-33626-137420876386793/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:12:31 np0005539065 python3[30579]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 12:12:31 np0005539065 python3[30652]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764349948.9250634-33626-137420876386793/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:12:32 np0005539065 python3[30678]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 12:12:32 np0005539065 python3[30751]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764349948.9250634-33626-137420876386793/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:12:32 np0005539065 python3[30777]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 12:12:32 np0005539065 python3[30850]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764349948.9250634-33626-137420876386793/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=25e801a9a05537c191e2aa500f19076ac31d3e5b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:15:17 np0005539065 python3[30908]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:20:18 np0005539065 systemd-logind[790]: Session 7 logged out. Waiting for processes to exit.
Nov 28 12:20:18 np0005539065 systemd[1]: session-7.scope: Deactivated successfully.
Nov 28 12:20:18 np0005539065 systemd[1]: session-7.scope: Consumed 4.478s CPU time.
Nov 28 12:20:18 np0005539065 systemd-logind[790]: Removed session 7.
Nov 28 12:28:36 np0005539065 systemd-logind[790]: New session 8 of user zuul.
Nov 28 12:28:36 np0005539065 systemd[1]: Started Session 8 of User zuul.
Nov 28 12:28:37 np0005539065 python3.9[31072]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:28:38 np0005539065 python3.9[31253]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:28:46 np0005539065 systemd[1]: session-8.scope: Deactivated successfully.
Nov 28 12:28:46 np0005539065 systemd[1]: session-8.scope: Consumed 7.896s CPU time.
Nov 28 12:28:46 np0005539065 systemd-logind[790]: Session 8 logged out. Waiting for processes to exit.
Nov 28 12:28:46 np0005539065 systemd-logind[790]: Removed session 8.
Nov 28 12:28:52 np0005539065 systemd-logind[790]: New session 9 of user zuul.
Nov 28 12:28:52 np0005539065 systemd[1]: Started Session 9 of User zuul.
Nov 28 12:28:53 np0005539065 python3.9[31463]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:28:53 np0005539065 systemd[1]: session-9.scope: Deactivated successfully.
Nov 28 12:28:53 np0005539065 systemd-logind[790]: Session 9 logged out. Waiting for processes to exit.
Nov 28 12:28:53 np0005539065 systemd-logind[790]: Removed session 9.
Nov 28 12:29:10 np0005539065 systemd-logind[790]: New session 10 of user zuul.
Nov 28 12:29:10 np0005539065 systemd[1]: Started Session 10 of User zuul.
Nov 28 12:29:11 np0005539065 python3.9[31646]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 28 12:29:12 np0005539065 python3.9[31820]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:29:13 np0005539065 python3.9[31972]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:29:14 np0005539065 python3.9[32125]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:29:14 np0005539065 python3.9[32277]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:29:15 np0005539065 python3.9[32429]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:29:16 np0005539065 python3.9[32552]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764350954.9724536-73-242651442107309/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:29:16 np0005539065 python3.9[32704]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:29:17 np0005539065 python3.9[32860]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:29:18 np0005539065 python3.9[33012]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:29:18 np0005539065 python3.9[33162]: ansible-ansible.builtin.service_facts Invoked
Nov 28 12:29:21 np0005539065 python3.9[33415]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:29:22 np0005539065 python3.9[33565]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:29:23 np0005539065 python3.9[33719]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:29:24 np0005539065 python3.9[33877]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 28 12:29:25 np0005539065 python3.9[33961]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 28 12:30:11 np0005539065 systemd[1]: Reloading.
Nov 28 12:30:11 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:30:11 np0005539065 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 28 12:30:12 np0005539065 systemd[1]: Reloading.
Nov 28 12:30:12 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:30:12 np0005539065 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 28 12:30:12 np0005539065 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 28 12:30:12 np0005539065 systemd[1]: Reloading.
Nov 28 12:30:12 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:30:12 np0005539065 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 28 12:30:12 np0005539065 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Nov 28 12:30:12 np0005539065 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Nov 28 12:30:12 np0005539065 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Nov 28 12:31:23 np0005539065 kernel: SELinux:  Converting 2718 SID table entries...
Nov 28 12:31:23 np0005539065 kernel: SELinux:  policy capability network_peer_controls=1
Nov 28 12:31:23 np0005539065 kernel: SELinux:  policy capability open_perms=1
Nov 28 12:31:23 np0005539065 kernel: SELinux:  policy capability extended_socket_class=1
Nov 28 12:31:23 np0005539065 kernel: SELinux:  policy capability always_check_network=0
Nov 28 12:31:23 np0005539065 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 28 12:31:23 np0005539065 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 28 12:31:23 np0005539065 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 28 12:31:23 np0005539065 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 28 12:31:24 np0005539065 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 28 12:31:24 np0005539065 systemd[1]: Starting man-db-cache-update.service...
Nov 28 12:31:24 np0005539065 systemd[1]: Reloading.
Nov 28 12:31:24 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:31:24 np0005539065 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 28 12:31:25 np0005539065 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 28 12:31:25 np0005539065 systemd[1]: Finished man-db-cache-update.service.
Nov 28 12:31:25 np0005539065 systemd[1]: man-db-cache-update.service: Consumed 1.181s CPU time.
Nov 28 12:31:25 np0005539065 systemd[1]: run-rf70c292371804fdb90ce7cc5f43dd599.service: Deactivated successfully.
Nov 28 12:31:25 np0005539065 python3.9[35493]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:31:27 np0005539065 python3.9[35775]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 28 12:31:28 np0005539065 python3.9[35927]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 28 12:31:33 np0005539065 python3.9[36080]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:31:38 np0005539065 python3.9[36232]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 28 12:31:39 np0005539065 python3.9[36384]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:31:39 np0005539065 python3.9[36536]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:31:40 np0005539065 python3.9[36659]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351099.3689377-236-30003869350526/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=6db368e8eab9994da74d1f7f8980fe4061371735 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:31:41 np0005539065 python3.9[36811]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:31:41 np0005539065 python3.9[36963]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:31:42 np0005539065 python3.9[37116]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:31:43 np0005539065 python3.9[37268]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 28 12:31:43 np0005539065 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 28 12:31:43 np0005539065 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 28 12:31:44 np0005539065 python3.9[37422]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 28 12:31:45 np0005539065 python3.9[37580]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 28 12:31:46 np0005539065 python3.9[37740]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 28 12:31:46 np0005539065 python3.9[37893]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 28 12:31:47 np0005539065 python3.9[38051]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 28 12:31:48 np0005539065 python3.9[38203]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 28 12:31:51 np0005539065 python3.9[38356]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:31:51 np0005539065 python3.9[38508]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:31:52 np0005539065 python3.9[38631]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764351111.4349227-355-132802219860417/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:31:53 np0005539065 python3.9[38783]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 12:31:53 np0005539065 systemd[1]: Starting Load Kernel Modules...
Nov 28 12:31:53 np0005539065 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 28 12:31:53 np0005539065 kernel: Bridge firewalling registered
Nov 28 12:31:53 np0005539065 systemd-modules-load[38787]: Inserted module 'br_netfilter'
Nov 28 12:31:53 np0005539065 systemd[1]: Finished Load Kernel Modules.
Nov 28 12:31:54 np0005539065 python3.9[38944]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:31:54 np0005539065 python3.9[39067]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764351113.640615-378-91665456455800/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:31:55 np0005539065 python3.9[39219]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 28 12:31:58 np0005539065 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Nov 28 12:31:58 np0005539065 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Nov 28 12:31:58 np0005539065 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 28 12:31:58 np0005539065 systemd[1]: Starting man-db-cache-update.service...
Nov 28 12:31:58 np0005539065 systemd[1]: Reloading.
Nov 28 12:31:58 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:31:58 np0005539065 systemd[1]: Starting dnf makecache...
Nov 28 12:31:58 np0005539065 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 28 12:31:59 np0005539065 dnf[39294]: Failed determining last makecache time.
Nov 28 12:31:59 np0005539065 dnf[39294]: delorean-openstack-barbican-42b4c41831408a8e323  98 kB/s | 3.0 kB     00:00
Nov 28 12:31:59 np0005539065 dnf[39294]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 161 kB/s | 3.0 kB     00:00
Nov 28 12:31:59 np0005539065 dnf[39294]: delorean-openstack-cinder-1c00d6490d88e436f26ef 179 kB/s | 3.0 kB     00:00
Nov 28 12:31:59 np0005539065 dnf[39294]: delorean-python-stevedore-c4acc5639fd2329372142 182 kB/s | 3.0 kB     00:00
Nov 28 12:31:59 np0005539065 dnf[39294]: delorean-python-cloudkitty-tests-tempest-2c80f8 142 kB/s | 3.0 kB     00:00
Nov 28 12:31:59 np0005539065 dnf[39294]: delorean-os-net-config-9758ab42364673d01bc5014e 158 kB/s | 3.0 kB     00:00
Nov 28 12:31:59 np0005539065 dnf[39294]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 169 kB/s | 3.0 kB     00:00
Nov 28 12:31:59 np0005539065 dnf[39294]: delorean-python-designate-tests-tempest-347fdbc 153 kB/s | 3.0 kB     00:00
Nov 28 12:31:59 np0005539065 dnf[39294]: delorean-openstack-glance-1fd12c29b339f30fe823e 178 kB/s | 3.0 kB     00:00
Nov 28 12:31:59 np0005539065 dnf[39294]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 192 kB/s | 3.0 kB     00:00
Nov 28 12:31:59 np0005539065 dnf[39294]: delorean-openstack-manila-3c01b7181572c95dac462 188 kB/s | 3.0 kB     00:00
Nov 28 12:31:59 np0005539065 dnf[39294]: delorean-python-whitebox-neutron-tests-tempest- 150 kB/s | 3.0 kB     00:00
Nov 28 12:31:59 np0005539065 dnf[39294]: delorean-openstack-octavia-ba397f07a7331190208c 184 kB/s | 3.0 kB     00:00
Nov 28 12:31:59 np0005539065 dnf[39294]: delorean-openstack-watcher-c014f81a8647287f6dcc 176 kB/s | 3.0 kB     00:00
Nov 28 12:31:59 np0005539065 dnf[39294]: delorean-python-tcib-1124124ec06aadbac34f0d340b 126 kB/s | 3.0 kB     00:00
Nov 28 12:31:59 np0005539065 dnf[39294]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 185 kB/s | 3.0 kB     00:00
Nov 28 12:31:59 np0005539065 dnf[39294]: delorean-openstack-swift-dc98a8463506ac520c469a 193 kB/s | 3.0 kB     00:00
Nov 28 12:31:59 np0005539065 dnf[39294]: delorean-python-tempestconf-8515371b7cceebd4282 182 kB/s | 3.0 kB     00:00
Nov 28 12:31:59 np0005539065 dnf[39294]: delorean-openstack-heat-ui-013accbfd179753bc3f0 159 kB/s | 3.0 kB     00:00
Nov 28 12:31:59 np0005539065 dnf[39294]: CentOS Stream 9 - BaseOS                         82 kB/s | 7.3 kB     00:00
Nov 28 12:31:59 np0005539065 dnf[39294]: CentOS Stream 9 - AppStream                      50 kB/s | 7.4 kB     00:00
Nov 28 12:32:00 np0005539065 dnf[39294]: CentOS Stream 9 - CRB                            77 kB/s | 7.2 kB     00:00
Nov 28 12:32:00 np0005539065 python3.9[40716]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:32:00 np0005539065 dnf[39294]: CentOS Stream 9 - Extras packages                75 kB/s | 8.3 kB     00:00
Nov 28 12:32:00 np0005539065 dnf[39294]: dlrn-antelope-testing                           168 kB/s | 3.0 kB     00:00
Nov 28 12:32:00 np0005539065 dnf[39294]: dlrn-antelope-build-deps                        157 kB/s | 3.0 kB     00:00
Nov 28 12:32:00 np0005539065 dnf[39294]: centos9-rabbitmq                                 94 kB/s | 3.0 kB     00:00
Nov 28 12:32:00 np0005539065 dnf[39294]: centos9-storage                                 148 kB/s | 3.0 kB     00:00
Nov 28 12:32:00 np0005539065 dnf[39294]: centos9-opstools                                136 kB/s | 3.0 kB     00:00
Nov 28 12:32:00 np0005539065 dnf[39294]: NFV SIG OpenvSwitch                             122 kB/s | 3.0 kB     00:00
Nov 28 12:32:00 np0005539065 dnf[39294]: repo-setup-centos-appstream                     162 kB/s | 4.4 kB     00:00
Nov 28 12:32:00 np0005539065 dnf[39294]: repo-setup-centos-baseos                        166 kB/s | 3.9 kB     00:00
Nov 28 12:32:00 np0005539065 dnf[39294]: repo-setup-centos-highavailability              160 kB/s | 3.9 kB     00:00
Nov 28 12:32:00 np0005539065 dnf[39294]: repo-setup-centos-powertools                    103 kB/s | 4.3 kB     00:00
Nov 28 12:32:00 np0005539065 dnf[39294]: Extra Packages for Enterprise Linux 9 - x86_64  231 kB/s |  34 kB     00:00
Nov 28 12:32:00 np0005539065 python3.9[41722]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 28 12:32:01 np0005539065 dnf[39294]: Metadata cache created.
Nov 28 12:32:01 np0005539065 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 28 12:32:01 np0005539065 systemd[1]: Finished dnf makecache.
Nov 28 12:32:01 np0005539065 systemd[1]: dnf-makecache.service: Consumed 1.740s CPU time.
Nov 28 12:32:01 np0005539065 python3.9[42487]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:32:02 np0005539065 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 28 12:32:02 np0005539065 systemd[1]: Finished man-db-cache-update.service.
Nov 28 12:32:02 np0005539065 systemd[1]: man-db-cache-update.service: Consumed 4.404s CPU time.
Nov 28 12:32:02 np0005539065 systemd[1]: run-r9324551c8614445eb9b842c041f748b4.service: Deactivated successfully.
Nov 28 12:32:02 np0005539065 python3.9[43423]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:32:02 np0005539065 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 28 12:32:02 np0005539065 systemd[1]: Starting Authorization Manager...
Nov 28 12:32:02 np0005539065 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 28 12:32:02 np0005539065 polkitd[43641]: Started polkitd version 0.117
Nov 28 12:32:02 np0005539065 systemd[1]: Started Authorization Manager.
Nov 28 12:32:03 np0005539065 python3.9[43811]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:32:03 np0005539065 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 28 12:32:04 np0005539065 systemd[1]: tuned.service: Deactivated successfully.
Nov 28 12:32:04 np0005539065 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 28 12:32:04 np0005539065 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 28 12:32:04 np0005539065 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 28 12:32:04 np0005539065 python3.9[43973]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 28 12:32:06 np0005539065 python3.9[44125]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:32:07 np0005539065 systemd[1]: Reloading.
Nov 28 12:32:07 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:32:07 np0005539065 python3.9[44313]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:32:07 np0005539065 systemd[1]: Reloading.
Nov 28 12:32:07 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:32:08 np0005539065 python3.9[44502]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:32:09 np0005539065 python3.9[44655]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:32:09 np0005539065 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 28 12:32:09 np0005539065 python3.9[44808]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:32:11 np0005539065 python3.9[44970]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:32:12 np0005539065 python3.9[45123]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 12:32:12 np0005539065 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 28 12:32:12 np0005539065 systemd[1]: Stopped Apply Kernel Variables.
Nov 28 12:32:12 np0005539065 systemd[1]: Stopping Apply Kernel Variables...
Nov 28 12:32:12 np0005539065 systemd[1]: Starting Apply Kernel Variables...
Nov 28 12:32:12 np0005539065 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 28 12:32:12 np0005539065 systemd[1]: Finished Apply Kernel Variables.
Nov 28 12:32:13 np0005539065 systemd-logind[790]: Session 10 logged out. Waiting for processes to exit.
Nov 28 12:32:13 np0005539065 systemd[1]: session-10.scope: Deactivated successfully.
Nov 28 12:32:13 np0005539065 systemd[1]: session-10.scope: Consumed 2min 15.712s CPU time.
Nov 28 12:32:13 np0005539065 systemd-logind[790]: Removed session 10.
Nov 28 12:32:18 np0005539065 systemd-logind[790]: New session 11 of user zuul.
Nov 28 12:32:18 np0005539065 systemd[1]: Started Session 11 of User zuul.
Nov 28 12:32:19 np0005539065 python3.9[45306]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:32:20 np0005539065 python3.9[45460]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:32:22 np0005539065 python3.9[45616]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:32:22 np0005539065 python3.9[45767]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:32:23 np0005539065 python3.9[45923]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 28 12:32:24 np0005539065 python3.9[46007]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 28 12:32:26 np0005539065 python3.9[46160]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 28 12:32:27 np0005539065 python3.9[46331]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:32:28 np0005539065 python3.9[46483]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:32:28 np0005539065 systemd[1]: var-lib-containers-storage-overlay-compat1806816586-merged.mount: Deactivated successfully.
Nov 28 12:32:28 np0005539065 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck2452981579-merged.mount: Deactivated successfully.
Nov 28 12:32:28 np0005539065 podman[46484]: 2025-11-28 17:32:28.487389566 +0000 UTC m=+0.068547353 system refresh
Nov 28 12:32:29 np0005539065 python3.9[46646]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:32:29 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:32:29 np0005539065 python3.9[46769]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351148.654525-109-254184217257009/.source.json follow=False _original_basename=podman_network_config.j2 checksum=53bf94a25bcaa21e9998bcb3f5fffb6d157632c7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:32:30 np0005539065 python3.9[46921]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:32:31 np0005539065 python3.9[47044]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764351150.0754778-124-60501191086024/.source.conf follow=False _original_basename=registries.conf.j2 checksum=197bf6e1388aca01b529f5e8d08286f263a7fb81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:32:32 np0005539065 python3.9[47196]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:32:32 np0005539065 python3.9[47348]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:32:33 np0005539065 python3.9[47500]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:32:33 np0005539065 python3.9[47652]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:32:34 np0005539065 python3.9[47803]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:32:35 np0005539065 python3.9[47957]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 28 12:32:37 np0005539065 python3.9[48110]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 28 12:32:40 np0005539065 python3.9[48270]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 28 12:32:42 np0005539065 python3.9[48423]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 28 12:32:44 np0005539065 python3.9[48576]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 28 12:32:46 np0005539065 python3.9[48732]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 28 12:32:49 np0005539065 python3.9[48901]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 28 12:32:51 np0005539065 python3.9[49054]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 28 12:33:01 np0005539065 python3.9[49391]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 28 12:33:03 np0005539065 python3.9[49547]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:33:04 np0005539065 python3.9[49722]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:33:04 np0005539065 python3.9[49845]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764351183.8751338-272-254886068648486/.source.json _original_basename=.zecjgpjh follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:33:05 np0005539065 python3.9[49997]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 28 12:33:06 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:33:07 np0005539065 systemd[1]: var-lib-containers-storage-overlay-compat3415774429-merged.mount: Deactivated successfully.
Nov 28 12:33:08 np0005539065 systemd[1]: var-lib-containers-storage-overlay-compat3415774429-lower\x2dmapped.mount: Deactivated successfully.
Nov 28 12:33:12 np0005539065 podman[50009]: 2025-11-28 17:33:12.405331849 +0000 UTC m=+6.382986355 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 28 12:33:12 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:33:12 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:33:12 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:33:13 np0005539065 python3.9[50307]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 28 12:33:24 np0005539065 podman[50320]: 2025-11-28 17:33:24.581692269 +0000 UTC m=+11.225022118 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 28 12:33:24 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:33:24 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:33:24 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:33:25 np0005539065 python3.9[50615]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 28 12:33:27 np0005539065 podman[50628]: 2025-11-28 17:33:27.171752948 +0000 UTC m=+1.652814422 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 28 12:33:27 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:33:27 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:33:27 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:33:28 np0005539065 python3.9[50865]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 28 12:33:43 np0005539065 podman[50880]: 2025-11-28 17:33:43.773234302 +0000 UTC m=+15.604484819 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 28 12:33:43 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:33:43 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:33:43 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:33:44 np0005539065 python3.9[51160]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 28 12:33:44 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:33:59 np0005539065 podman[51173]: 2025-11-28 17:33:59.598006512 +0000 UTC m=+14.802121860 image pull e473677aab0cdc2c7c03a6e756cd02c6bfc4f008b09c67064c39f2682bdecd39 quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Nov 28 12:33:59 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:33:59 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:33:59 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:34:00 np0005539065 python3.9[51490]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 28 12:34:02 np0005539065 podman[51502]: 2025-11-28 17:34:02.293692897 +0000 UTC m=+1.867494847 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Nov 28 12:34:02 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:34:02 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:34:02 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:34:03 np0005539065 python3.9[51776]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 28 12:34:03 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:34:08 np0005539065 podman[51787]: 2025-11-28 17:34:08.410796278 +0000 UTC m=+5.041515649 image pull 743c1960518ee2a8df257b87dd40a31faa57a99c6d0aa394baae4cd418c3c2b2 quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Nov 28 12:34:08 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:34:08 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:34:08 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:34:09 np0005539065 python3.9[52041]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 28 12:34:21 np0005539065 podman[52053]: 2025-11-28 17:34:21.301307884 +0000 UTC m=+12.049235269 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Nov 28 12:34:21 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:34:21 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:34:21 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:34:22 np0005539065 systemd[1]: session-11.scope: Deactivated successfully.
Nov 28 12:34:22 np0005539065 systemd[1]: session-11.scope: Consumed 2min 32.174s CPU time.
Nov 28 12:34:22 np0005539065 systemd-logind[790]: Session 11 logged out. Waiting for processes to exit.
Nov 28 12:34:22 np0005539065 systemd-logind[790]: Removed session 11.
Nov 28 12:34:27 np0005539065 systemd-logind[790]: New session 12 of user zuul.
Nov 28 12:34:27 np0005539065 systemd[1]: Started Session 12 of User zuul.
Nov 28 12:34:28 np0005539065 python3.9[52457]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:34:29 np0005539065 python3.9[52613]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 28 12:34:30 np0005539065 python3.9[52766]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 28 12:34:31 np0005539065 python3.9[52924]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 28 12:34:32 np0005539065 python3.9[53084]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 28 12:34:33 np0005539065 python3.9[53168]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 28 12:34:35 np0005539065 python3.9[53330]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 28 12:34:51 np0005539065 kernel: SELinux:  Converting 2732 SID table entries...
Nov 28 12:34:51 np0005539065 kernel: SELinux:  policy capability network_peer_controls=1
Nov 28 12:34:51 np0005539065 kernel: SELinux:  policy capability open_perms=1
Nov 28 12:34:51 np0005539065 kernel: SELinux:  policy capability extended_socket_class=1
Nov 28 12:34:51 np0005539065 kernel: SELinux:  policy capability always_check_network=0
Nov 28 12:34:51 np0005539065 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 28 12:34:51 np0005539065 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 28 12:34:51 np0005539065 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 28 12:34:52 np0005539065 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 28 12:34:52 np0005539065 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 28 12:34:53 np0005539065 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 28 12:34:53 np0005539065 systemd[1]: Starting man-db-cache-update.service...
Nov 28 12:34:53 np0005539065 systemd[1]: Reloading.
Nov 28 12:34:53 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:34:53 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:34:53 np0005539065 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 28 12:34:54 np0005539065 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 28 12:34:54 np0005539065 systemd[1]: Finished man-db-cache-update.service.
Nov 28 12:34:54 np0005539065 systemd[1]: run-r5f2b17cec271416c8b318816683c4455.service: Deactivated successfully.
Nov 28 12:34:55 np0005539065 python3.9[54428]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 28 12:34:55 np0005539065 systemd[1]: Reloading.
Nov 28 12:34:55 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:34:55 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:34:55 np0005539065 systemd[1]: Starting Open vSwitch Database Unit...
Nov 28 12:34:55 np0005539065 chown[54470]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 28 12:34:56 np0005539065 ovs-ctl[54475]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 28 12:34:56 np0005539065 ovs-ctl[54475]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 28 12:34:56 np0005539065 ovs-ctl[54475]: Starting ovsdb-server [  OK  ]
Nov 28 12:34:56 np0005539065 ovs-vsctl[54524]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 28 12:34:56 np0005539065 ovs-vsctl[54544]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"d60b742f-7e94-4137-b50a-cfc8eac54167\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 28 12:34:56 np0005539065 ovs-ctl[54475]: Configuring Open vSwitch system IDs [  OK  ]
Nov 28 12:34:56 np0005539065 ovs-vsctl[54550]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 28 12:34:56 np0005539065 ovs-ctl[54475]: Enabling remote OVSDB managers [  OK  ]
Nov 28 12:34:56 np0005539065 systemd[1]: Started Open vSwitch Database Unit.
Nov 28 12:34:56 np0005539065 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 28 12:34:56 np0005539065 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 28 12:34:56 np0005539065 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 28 12:34:56 np0005539065 kernel: openvswitch: Open vSwitch switching datapath
Nov 28 12:34:56 np0005539065 ovs-ctl[54595]: Inserting openvswitch module [  OK  ]
Nov 28 12:34:56 np0005539065 ovs-ctl[54564]: Starting ovs-vswitchd [  OK  ]
Nov 28 12:34:56 np0005539065 ovs-vsctl[54615]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 28 12:34:56 np0005539065 ovs-ctl[54564]: Enabling remote OVSDB managers [  OK  ]
Nov 28 12:34:56 np0005539065 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 28 12:34:56 np0005539065 systemd[1]: Starting Open vSwitch...
Nov 28 12:34:56 np0005539065 systemd[1]: Finished Open vSwitch.
Nov 28 12:34:57 np0005539065 python3.9[54767]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:34:58 np0005539065 python3.9[54919]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 28 12:34:59 np0005539065 kernel: SELinux:  Converting 2746 SID table entries...
Nov 28 12:34:59 np0005539065 kernel: SELinux:  policy capability network_peer_controls=1
Nov 28 12:34:59 np0005539065 kernel: SELinux:  policy capability open_perms=1
Nov 28 12:34:59 np0005539065 kernel: SELinux:  policy capability extended_socket_class=1
Nov 28 12:34:59 np0005539065 kernel: SELinux:  policy capability always_check_network=0
Nov 28 12:34:59 np0005539065 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 28 12:34:59 np0005539065 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 28 12:34:59 np0005539065 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 28 12:35:00 np0005539065 python3.9[55074]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:35:01 np0005539065 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 28 12:35:01 np0005539065 python3.9[55232]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 28 12:35:03 np0005539065 python3.9[55385]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:35:05 np0005539065 python3.9[55672]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 28 12:35:05 np0005539065 python3.9[55822]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:35:06 np0005539065 python3.9[55976]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 28 12:35:08 np0005539065 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 28 12:35:08 np0005539065 systemd[1]: Starting man-db-cache-update.service...
Nov 28 12:35:08 np0005539065 systemd[1]: Reloading.
Nov 28 12:35:08 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:35:08 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:35:08 np0005539065 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 28 12:35:09 np0005539065 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 28 12:35:09 np0005539065 systemd[1]: Finished man-db-cache-update.service.
Nov 28 12:35:09 np0005539065 systemd[1]: run-r917ce7f3b1294731ab1a2053723890c9.service: Deactivated successfully.
Nov 28 12:35:10 np0005539065 python3.9[56293]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 12:35:10 np0005539065 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 28 12:35:10 np0005539065 systemd[1]: Stopped Network Manager Wait Online.
Nov 28 12:35:10 np0005539065 systemd[1]: Stopping Network Manager Wait Online...
Nov 28 12:35:10 np0005539065 systemd[1]: Stopping Network Manager...
Nov 28 12:35:10 np0005539065 NetworkManager[7185]: <info>  [1764351310.3293] caught SIGTERM, shutting down normally.
Nov 28 12:35:10 np0005539065 NetworkManager[7185]: <info>  [1764351310.3312] dhcp4 (eth0): canceled DHCP transaction
Nov 28 12:35:10 np0005539065 NetworkManager[7185]: <info>  [1764351310.3312] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 28 12:35:10 np0005539065 NetworkManager[7185]: <info>  [1764351310.3312] dhcp4 (eth0): state changed no lease
Nov 28 12:35:10 np0005539065 NetworkManager[7185]: <info>  [1764351310.3316] manager: NetworkManager state is now CONNECTED_SITE
Nov 28 12:35:10 np0005539065 NetworkManager[7185]: <info>  [1764351310.3396] exiting (success)
Nov 28 12:35:10 np0005539065 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 28 12:35:10 np0005539065 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 28 12:35:10 np0005539065 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 28 12:35:10 np0005539065 systemd[1]: Stopped Network Manager.
Nov 28 12:35:10 np0005539065 systemd[1]: NetworkManager.service: Consumed 13.846s CPU time, 4.1M memory peak, read 0B from disk, written 32.5K to disk.
Nov 28 12:35:10 np0005539065 systemd[1]: Starting Network Manager...
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.4373] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:689ffb1a-47b1-4ef9-97d1-e98882930650)
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.4377] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.4466] manager[0x55be42136090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 28 12:35:10 np0005539065 systemd[1]: Starting Hostname Service...
Nov 28 12:35:10 np0005539065 systemd[1]: Started Hostname Service.
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5262] hostname: hostname: using hostnamed
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5262] hostname: static hostname changed from (none) to "compute-0"
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5269] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5275] manager[0x55be42136090]: rfkill: Wi-Fi hardware radio set enabled
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5275] manager[0x55be42136090]: rfkill: WWAN hardware radio set enabled
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5298] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5307] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5308] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5308] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5309] manager: Networking is enabled by state file
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5310] settings: Loaded settings plugin: keyfile (internal)
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5314] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5337] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5347] dhcp: init: Using DHCP client 'internal'
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5349] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5354] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5359] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5365] device (lo): Activation: starting connection 'lo' (ebd0c5b7-fd31-4dc9-bad3-b5977a867d53)
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5371] device (eth0): carrier: link connected
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5374] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5377] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5378] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5382] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5388] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5392] device (eth1): carrier: link connected
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5395] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5399] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (c5b07057-7ecd-510b-8309-3fe2cb8f2f90) (indicated)
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5399] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5403] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5408] device (eth1): Activation: starting connection 'ci-private-network' (c5b07057-7ecd-510b-8309-3fe2cb8f2f90)
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5413] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 28 12:35:10 np0005539065 systemd[1]: Started Network Manager.
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5419] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5422] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5423] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5424] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5426] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5427] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5429] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5433] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5437] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5439] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5447] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5459] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5476] dhcp4 (eth0): state changed new lease, address=38.129.56.33
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5481] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 28 12:35:10 np0005539065 systemd[1]: Starting Network Manager Wait Online...
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5563] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5574] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5576] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5579] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5587] device (lo): Activation: successful, device activated.
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5595] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5600] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5605] device (eth1): Activation: successful, device activated.
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5617] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5619] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5623] manager: NetworkManager state is now CONNECTED_SITE
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5628] device (eth0): Activation: successful, device activated.
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5633] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 28 12:35:10 np0005539065 NetworkManager[56307]: <info>  [1764351310.5636] manager: startup complete
Nov 28 12:35:10 np0005539065 systemd[1]: Finished Network Manager Wait Online.
Nov 28 12:35:11 np0005539065 python3.9[56519]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 28 12:35:16 np0005539065 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 28 12:35:16 np0005539065 systemd[1]: Starting man-db-cache-update.service...
Nov 28 12:35:16 np0005539065 systemd[1]: Reloading.
Nov 28 12:35:16 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:35:16 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:35:17 np0005539065 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 28 12:35:18 np0005539065 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 28 12:35:18 np0005539065 systemd[1]: Finished man-db-cache-update.service.
Nov 28 12:35:18 np0005539065 systemd[1]: run-r2c29c3aeb98a40089a7b2448a25cc818.service: Deactivated successfully.
Nov 28 12:35:19 np0005539065 python3.9[56978]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:35:20 np0005539065 python3.9[57130]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:35:20 np0005539065 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 28 12:35:20 np0005539065 python3.9[57284]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:35:21 np0005539065 python3.9[57436]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:35:22 np0005539065 python3.9[57588]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:35:22 np0005539065 python3.9[57740]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:35:23 np0005539065 python3.9[57892]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:35:24 np0005539065 python3.9[58015]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764351322.8778002-229-14386379461119/.source _original_basename=.jgc5e3ji follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:35:25 np0005539065 python3.9[58167]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:35:25 np0005539065 python3.9[58319]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 28 12:35:26 np0005539065 python3.9[58471]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:35:28 np0005539065 python3.9[58898]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 28 12:35:29 np0005539065 ansible-async_wrapper.py[59073]: Invoked with j457594361676 300 /home/zuul/.ansible/tmp/ansible-tmp-1764351328.8373742-295-102384331770410/AnsiballZ_edpm_os_net_config.py _
Nov 28 12:35:29 np0005539065 ansible-async_wrapper.py[59076]: Starting module and watcher
Nov 28 12:35:29 np0005539065 ansible-async_wrapper.py[59076]: Start watching 59077 (300)
Nov 28 12:35:29 np0005539065 ansible-async_wrapper.py[59077]: Start module (59077)
Nov 28 12:35:29 np0005539065 ansible-async_wrapper.py[59073]: Return async_wrapper task started.
Nov 28 12:35:29 np0005539065 python3.9[59078]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 28 12:35:30 np0005539065 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 28 12:35:30 np0005539065 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 28 12:35:30 np0005539065 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 28 12:35:30 np0005539065 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 28 12:35:30 np0005539065 kernel: cfg80211: failed to load regulatory.db
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.7448] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59079 uid=0 result="success"
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.7474] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59079 uid=0 result="success"
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.7971] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.7974] audit: op="connection-add" uuid="180f4542-43d0-47c7-8659-1f53ac78200b" name="br-ex-br" pid=59079 uid=0 result="success"
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.7989] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.7991] audit: op="connection-add" uuid="f294e0ae-502e-41bc-8320-6d974d767ce4" name="br-ex-port" pid=59079 uid=0 result="success"
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8002] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8003] audit: op="connection-add" uuid="08d990bf-d962-4431-91d4-500176c6419a" name="eth1-port" pid=59079 uid=0 result="success"
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8014] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8016] audit: op="connection-add" uuid="8f63dc64-715e-4544-a52e-3eeaa06b1db2" name="vlan20-port" pid=59079 uid=0 result="success"
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8026] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8028] audit: op="connection-add" uuid="1f8f387d-eba4-4a1d-a20f-7cdf45528c5a" name="vlan21-port" pid=59079 uid=0 result="success"
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8038] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8040] audit: op="connection-add" uuid="9098c8b2-b521-486b-87e5-a0719e1f5f1c" name="vlan22-port" pid=59079 uid=0 result="success"
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8059] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority,connection.timestamp,ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode" pid=59079 uid=0 result="success"
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8073] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8075] audit: op="connection-add" uuid="9dbc89ed-a9fc-4a6c-bfda-b2ba800d6b86" name="br-ex-if" pid=59079 uid=0 result="success"
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8139] audit: op="connection-update" uuid="c5b07057-7ecd-510b-8309-3fe2cb8f2f90" name="ci-private-network" args="ovs-external-ids.data,ipv4.routing-rules,ipv4.method,ipv4.addresses,ipv4.routes,ipv4.dns,ipv4.never-default,ovs-interface.type,connection.controller,connection.master,connection.port-type,connection.slave-type,connection.timestamp,ipv6.method,ipv6.addresses,ipv6.addr-gen-mode,ipv6.routes,ipv6.dns,ipv6.routing-rules" pid=59079 uid=0 result="success"
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8153] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8155] audit: op="connection-add" uuid="7ee3ea20-88a8-4642-b171-96dc90b5f485" name="vlan20-if" pid=59079 uid=0 result="success"
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8172] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8174] audit: op="connection-add" uuid="44144a1a-a05e-4230-b457-548944d86de4" name="vlan21-if" pid=59079 uid=0 result="success"
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8190] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8192] audit: op="connection-add" uuid="86a9c2d8-8675-41d0-81b0-f0d4bbb1dc3f" name="vlan22-if" pid=59079 uid=0 result="success"
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8204] audit: op="connection-delete" uuid="fece0453-87ca-3af0-bdf9-bbcfdc8b0a82" name="Wired connection 1" pid=59079 uid=0 result="success"
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8219] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8229] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8234] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (180f4542-43d0-47c7-8659-1f53ac78200b)
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8235] audit: op="connection-activate" uuid="180f4542-43d0-47c7-8659-1f53ac78200b" name="br-ex-br" pid=59079 uid=0 result="success"
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8237] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8243] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8247] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (f294e0ae-502e-41bc-8320-6d974d767ce4)
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8249] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8254] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8258] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (08d990bf-d962-4431-91d4-500176c6419a)
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8260] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8266] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8270] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (8f63dc64-715e-4544-a52e-3eeaa06b1db2)
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8273] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8278] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8282] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (1f8f387d-eba4-4a1d-a20f-7cdf45528c5a)
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8284] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8290] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8294] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (9098c8b2-b521-486b-87e5-a0719e1f5f1c)
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8295] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8298] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8300] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8305] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8310] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8314] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (9dbc89ed-a9fc-4a6c-bfda-b2ba800d6b86)
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8315] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8318] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8321] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8322] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8324] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8335] device (eth1): disconnecting for new activation request.
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8336] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8368] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8371] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8372] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8376] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8380] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8385] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (7ee3ea20-88a8-4642-b171-96dc90b5f485)
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8386] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8390] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8392] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8394] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8398] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8404] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8411] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (44144a1a-a05e-4230-b457-548944d86de4)
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8412] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8415] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8418] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8419] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8423] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8427] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8433] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (86a9c2d8-8675-41d0-81b0-f0d4bbb1dc3f)
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8434] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8438] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8440] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8442] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8444] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8457] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority,ipv6.method,ipv6.addr-gen-mode" pid=59079 uid=0 result="success"
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8460] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8463] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8465] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8471] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8475] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8480] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8483] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8485] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8489] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 kernel: ovs-system: entered promiscuous mode
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8494] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8498] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8501] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8506] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8510] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 kernel: Timeout policy base is empty
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8514] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8515] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8519] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8523] dhcp4 (eth0): canceled DHCP transaction
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8523] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8524] dhcp4 (eth0): state changed no lease
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8525] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 28 12:35:31 np0005539065 systemd-udevd[59085]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8536] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8542] audit: op="device-reapply" interface="eth1" ifindex=3 pid=59079 uid=0 result="fail" reason="Device is not activated"
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8545] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8551] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8586] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8588] dhcp4 (eth0): state changed new lease, address=38.129.56.33
Nov 28 12:35:31 np0005539065 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8628] device (eth1): disconnecting for new activation request.
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8629] audit: op="connection-activate" uuid="c5b07057-7ecd-510b-8309-3fe2cb8f2f90" name="ci-private-network" pid=59079 uid=0 result="success"
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8667] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59079 uid=0 result="success"
Nov 28 12:35:31 np0005539065 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8690] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8748] device (eth1): Activation: starting connection 'ci-private-network' (c5b07057-7ecd-510b-8309-3fe2cb8f2f90)
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8757] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8760] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8766] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8767] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8769] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8770] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8772] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8774] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8779] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8786] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8790] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8794] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8798] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8801] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8805] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8809] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8814] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8818] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8823] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8827] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8832] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8838] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8843] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8882] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8884] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 kernel: br-ex: entered promiscuous mode
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.8890] device (eth1): Activation: successful, device activated.
Nov 28 12:35:31 np0005539065 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 28 12:35:31 np0005539065 kernel: vlan22: entered promiscuous mode
Nov 28 12:35:31 np0005539065 systemd-udevd[59084]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.9011] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.9023] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.9039] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.9040] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.9046] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 28 12:35:31 np0005539065 kernel: vlan20: entered promiscuous mode
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.9090] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.9101] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.9116] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.9118] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 kernel: vlan21: entered promiscuous mode
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.9132] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.9208] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.9220] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.9230] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.9245] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.9286] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.9288] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.9291] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.9299] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.9305] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 28 12:35:31 np0005539065 NetworkManager[56307]: <info>  [1764351331.9312] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 28 12:35:33 np0005539065 NetworkManager[56307]: <info>  [1764351333.0361] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59079 uid=0 result="success"
Nov 28 12:35:33 np0005539065 NetworkManager[56307]: <info>  [1764351333.1765] checkpoint[0x55be4210c950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 28 12:35:33 np0005539065 NetworkManager[56307]: <info>  [1764351333.1767] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59079 uid=0 result="success"
Nov 28 12:35:33 np0005539065 NetworkManager[56307]: <info>  [1764351333.4613] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59079 uid=0 result="success"
Nov 28 12:35:33 np0005539065 NetworkManager[56307]: <info>  [1764351333.4626] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59079 uid=0 result="success"
Nov 28 12:35:33 np0005539065 python3.9[59412]: ansible-ansible.legacy.async_status Invoked with jid=j457594361676.59073 mode=status _async_dir=/root/.ansible_async
Nov 28 12:35:33 np0005539065 NetworkManager[56307]: <info>  [1764351333.6362] audit: op="networking-control" arg="global-dns-configuration" pid=59079 uid=0 result="success"
Nov 28 12:35:33 np0005539065 NetworkManager[56307]: <info>  [1764351333.6402] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 28 12:35:33 np0005539065 NetworkManager[56307]: <info>  [1764351333.6443] audit: op="networking-control" arg="global-dns-configuration" pid=59079 uid=0 result="success"
Nov 28 12:35:33 np0005539065 NetworkManager[56307]: <info>  [1764351333.6471] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59079 uid=0 result="success"
Nov 28 12:35:33 np0005539065 NetworkManager[56307]: <info>  [1764351333.7779] checkpoint[0x55be4210ca20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 28 12:35:33 np0005539065 NetworkManager[56307]: <info>  [1764351333.7788] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59079 uid=0 result="success"
Nov 28 12:35:33 np0005539065 ansible-async_wrapper.py[59077]: Module complete (59077)
Nov 28 12:35:34 np0005539065 ansible-async_wrapper.py[59076]: Done in kid B.
Nov 28 12:35:37 np0005539065 python3.9[59517]: ansible-ansible.legacy.async_status Invoked with jid=j457594361676.59073 mode=status _async_dir=/root/.ansible_async
Nov 28 12:35:37 np0005539065 python3.9[59617]: ansible-ansible.legacy.async_status Invoked with jid=j457594361676.59073 mode=cleanup _async_dir=/root/.ansible_async
Nov 28 12:35:38 np0005539065 python3.9[59769]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:35:38 np0005539065 python3.9[59892]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764351337.7103002-322-96712999946585/.source.returncode _original_basename=.ku7n638k follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:35:39 np0005539065 python3.9[60044]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:35:40 np0005539065 python3.9[60167]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764351339.0992-338-107536993931428/.source.cfg _original_basename=.lyub01kd follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:35:40 np0005539065 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 28 12:35:40 np0005539065 python3.9[60323]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 12:35:41 np0005539065 systemd[1]: Reloading Network Manager...
Nov 28 12:35:41 np0005539065 NetworkManager[56307]: <info>  [1764351341.0464] audit: op="reload" arg="0" pid=60327 uid=0 result="success"
Nov 28 12:35:41 np0005539065 NetworkManager[56307]: <info>  [1764351341.0470] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 28 12:35:41 np0005539065 systemd[1]: Reloaded Network Manager.
Nov 28 12:35:41 np0005539065 systemd[1]: session-12.scope: Deactivated successfully.
Nov 28 12:35:41 np0005539065 systemd[1]: session-12.scope: Consumed 52.742s CPU time.
Nov 28 12:35:41 np0005539065 systemd-logind[790]: Session 12 logged out. Waiting for processes to exit.
Nov 28 12:35:41 np0005539065 systemd-logind[790]: Removed session 12.
Nov 28 12:35:47 np0005539065 systemd-logind[790]: New session 13 of user zuul.
Nov 28 12:35:47 np0005539065 systemd[1]: Started Session 13 of User zuul.
Nov 28 12:35:48 np0005539065 python3.9[60511]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:35:49 np0005539065 python3.9[60665]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 28 12:35:50 np0005539065 python3.9[60855]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:35:51 np0005539065 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 28 12:35:51 np0005539065 systemd-logind[790]: Session 13 logged out. Waiting for processes to exit.
Nov 28 12:35:51 np0005539065 systemd[1]: session-13.scope: Deactivated successfully.
Nov 28 12:35:51 np0005539065 systemd[1]: session-13.scope: Consumed 2.217s CPU time.
Nov 28 12:35:51 np0005539065 systemd-logind[790]: Removed session 13.
Nov 28 12:35:57 np0005539065 systemd-logind[790]: New session 14 of user zuul.
Nov 28 12:35:57 np0005539065 systemd[1]: Started Session 14 of User zuul.
Nov 28 12:35:58 np0005539065 python3.9[61038]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:35:59 np0005539065 python3.9[61192]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:36:00 np0005539065 python3.9[61349]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 28 12:36:01 np0005539065 python3.9[61433]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 28 12:36:03 np0005539065 python3.9[61587]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 28 12:36:04 np0005539065 python3.9[61778]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:36:05 np0005539065 python3.9[61930]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:36:05 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:36:05 np0005539065 python3.9[62092]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:36:06 np0005539065 python3.9[62170]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:36:06 np0005539065 python3.9[62322]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:36:07 np0005539065 python3.9[62400]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:36:08 np0005539065 python3.9[62552]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:36:08 np0005539065 python3.9[62704]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:36:09 np0005539065 python3.9[62856]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:36:10 np0005539065 python3.9[63008]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:36:10 np0005539065 python3.9[63160]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 28 12:36:13 np0005539065 python3.9[63313]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:36:13 np0005539065 python3.9[63467]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:36:14 np0005539065 python3.9[63619]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:36:15 np0005539065 python3.9[63771]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:36:16 np0005539065 python3.9[63924]: ansible-service_facts Invoked
Nov 28 12:36:16 np0005539065 network[63941]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 28 12:36:16 np0005539065 network[63942]: 'network-scripts' will be removed from distribution in near future.
Nov 28 12:36:16 np0005539065 network[63943]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 28 12:36:20 np0005539065 python3.9[64395]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 28 12:36:23 np0005539065 python3.9[64548]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 28 12:36:24 np0005539065 python3.9[64700]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:36:25 np0005539065 python3.9[64825]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764351383.7608044-232-125926372777321/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:36:26 np0005539065 python3.9[64979]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:36:26 np0005539065 python3.9[65104]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764351385.5071168-247-141404659929835/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:36:27 np0005539065 python3.9[65259]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:36:28 np0005539065 python3.9[65413]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 28 12:36:29 np0005539065 python3.9[65497]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:36:30 np0005539065 python3.9[65651]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 28 12:36:31 np0005539065 python3.9[65735]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 12:36:31 np0005539065 systemd[1]: Stopping NTP client/server...
Nov 28 12:36:31 np0005539065 chronyd[793]: chronyd exiting
Nov 28 12:36:31 np0005539065 systemd[1]: chronyd.service: Deactivated successfully.
Nov 28 12:36:31 np0005539065 systemd[1]: Stopped NTP client/server.
Nov 28 12:36:31 np0005539065 systemd[1]: Starting NTP client/server...
Nov 28 12:36:31 np0005539065 chronyd[65743]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 28 12:36:31 np0005539065 chronyd[65743]: Frequency -23.714 +/- 0.293 ppm read from /var/lib/chrony/drift
Nov 28 12:36:31 np0005539065 chronyd[65743]: Loaded seccomp filter (level 2)
Nov 28 12:36:31 np0005539065 systemd[1]: Started NTP client/server.
Nov 28 12:36:32 np0005539065 systemd[1]: session-14.scope: Deactivated successfully.
Nov 28 12:36:32 np0005539065 systemd[1]: session-14.scope: Consumed 25.334s CPU time.
Nov 28 12:36:32 np0005539065 systemd-logind[790]: Session 14 logged out. Waiting for processes to exit.
Nov 28 12:36:32 np0005539065 systemd-logind[790]: Removed session 14.
Nov 28 12:36:37 np0005539065 systemd-logind[790]: New session 15 of user zuul.
Nov 28 12:36:37 np0005539065 systemd[1]: Started Session 15 of User zuul.
Nov 28 12:36:38 np0005539065 python3.9[65922]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:36:39 np0005539065 python3.9[66078]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:36:40 np0005539065 python3.9[66253]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:36:40 np0005539065 python3.9[66331]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.fy57oa4s recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:36:41 np0005539065 python3.9[66483]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:36:42 np0005539065 python3.9[66606]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764351401.2994273-61-187739091247193/.source _original_basename=.ygxj3bng follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:36:43 np0005539065 python3.9[66758]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:36:43 np0005539065 python3.9[66910]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:36:44 np0005539065 python3.9[67033]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764351403.2029448-85-53361461215623/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:36:44 np0005539065 python3.9[67185]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:36:45 np0005539065 python3.9[67308]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764351404.3365803-85-116839475563103/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:36:46 np0005539065 python3.9[67460]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:36:46 np0005539065 python3.9[67612]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:36:47 np0005539065 python3.9[67735]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351406.242632-122-201428582813957/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:36:47 np0005539065 python3.9[67887]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:36:48 np0005539065 python3.9[68010]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351407.3805466-137-8089427084292/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:36:49 np0005539065 python3.9[68162]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:36:49 np0005539065 systemd[1]: Reloading.
Nov 28 12:36:49 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:36:49 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:36:49 np0005539065 systemd[1]: Reloading.
Nov 28 12:36:49 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:36:49 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:36:49 np0005539065 systemd[1]: Starting EDPM Container Shutdown...
Nov 28 12:36:49 np0005539065 systemd[1]: Finished EDPM Container Shutdown.
Nov 28 12:36:50 np0005539065 python3.9[68389]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:36:51 np0005539065 python3.9[68512]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351410.0391705-160-185351472055647/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:36:51 np0005539065 python3.9[68664]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:36:52 np0005539065 python3.9[68787]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351411.1791842-175-19216033935362/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:36:52 np0005539065 python3.9[68939]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:36:52 np0005539065 systemd[1]: Reloading.
Nov 28 12:36:52 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:36:52 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:36:53 np0005539065 systemd[1]: Reloading.
Nov 28 12:36:53 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:36:53 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:36:53 np0005539065 systemd[1]: Starting Create netns directory...
Nov 28 12:36:53 np0005539065 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 28 12:36:53 np0005539065 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 28 12:36:53 np0005539065 systemd[1]: Finished Create netns directory.
Nov 28 12:36:54 np0005539065 python3.9[69166]: ansible-ansible.builtin.service_facts Invoked
Nov 28 12:36:54 np0005539065 network[69183]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 28 12:36:54 np0005539065 network[69184]: 'network-scripts' will be removed from distribution in near future.
Nov 28 12:36:54 np0005539065 network[69185]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 28 12:36:57 np0005539065 python3.9[69447]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:36:57 np0005539065 systemd[1]: Reloading.
Nov 28 12:36:57 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:36:57 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:36:57 np0005539065 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 28 12:36:57 np0005539065 iptables.init[69488]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 28 12:36:57 np0005539065 iptables.init[69488]: iptables: Flushing firewall rules: [  OK  ]
Nov 28 12:36:57 np0005539065 systemd[1]: iptables.service: Deactivated successfully.
Nov 28 12:36:57 np0005539065 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 28 12:36:58 np0005539065 python3.9[69684]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:36:59 np0005539065 python3.9[69838]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:36:59 np0005539065 systemd[1]: Reloading.
Nov 28 12:36:59 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:36:59 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:36:59 np0005539065 systemd[1]: Starting Netfilter Tables...
Nov 28 12:36:59 np0005539065 systemd[1]: Finished Netfilter Tables.
Nov 28 12:37:00 np0005539065 python3.9[70031]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:37:01 np0005539065 python3.9[70184]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:37:01 np0005539065 python3.9[70309]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764351420.8388155-244-79992988319848/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:37:02 np0005539065 python3.9[70462]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 12:37:03 np0005539065 systemd[1]: Reloading OpenSSH server daemon...
Nov 28 12:37:03 np0005539065 systemd[1]: Reloaded OpenSSH server daemon.
Nov 28 12:37:04 np0005539065 python3.9[70618]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:37:05 np0005539065 python3.9[70770]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:37:05 np0005539065 python3.9[70893]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351424.5193274-275-53768800919772/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:37:06 np0005539065 python3.9[71045]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 28 12:37:06 np0005539065 systemd[1]: Starting Time & Date Service...
Nov 28 12:37:06 np0005539065 systemd[1]: Started Time & Date Service.
Nov 28 12:37:07 np0005539065 python3.9[71201]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:37:08 np0005539065 python3.9[71353]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:37:08 np0005539065 python3.9[71476]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764351427.5172648-310-248010438783715/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:37:09 np0005539065 python3.9[71628]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:37:10 np0005539065 python3.9[71751]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764351428.9396365-325-70680910644153/.source.yaml _original_basename=.c0xz2wl_ follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:37:10 np0005539065 python3.9[71903]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:37:11 np0005539065 python3.9[72026]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351430.321296-340-132409747687101/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:37:11 np0005539065 python3.9[72178]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:37:12 np0005539065 python3.9[72331]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:37:13 np0005539065 python3[72484]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 28 12:37:14 np0005539065 python3.9[72636]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:37:14 np0005539065 python3.9[72759]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351433.5904393-379-101053477530345/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:37:15 np0005539065 python3.9[72911]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:37:15 np0005539065 python3.9[73034]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351434.8408234-394-49808629022776/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:37:16 np0005539065 python3.9[73186]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:37:16 np0005539065 python3.9[73309]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351435.994997-409-53872698445005/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:37:17 np0005539065 python3.9[73461]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:37:18 np0005539065 python3.9[73584]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351437.1568217-424-16008462661705/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:37:18 np0005539065 python3.9[73736]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:37:19 np0005539065 python3.9[73859]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351438.272375-439-244119679753441/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:37:19 np0005539065 python3.9[74011]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:37:20 np0005539065 python3.9[74163]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:37:21 np0005539065 python3.9[74322]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:37:21 np0005539065 python3.9[74475]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:37:22 np0005539065 python3.9[74627]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:37:23 np0005539065 python3.9[74779]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 28 12:37:23 np0005539065 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 28 12:37:23 np0005539065 python3.9[74933]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 28 12:37:24 np0005539065 systemd[1]: session-15.scope: Deactivated successfully.
Nov 28 12:37:24 np0005539065 systemd-logind[790]: Session 15 logged out. Waiting for processes to exit.
Nov 28 12:37:24 np0005539065 systemd[1]: session-15.scope: Consumed 33.882s CPU time.
Nov 28 12:37:24 np0005539065 systemd-logind[790]: Removed session 15.
Nov 28 12:37:29 np0005539065 systemd-logind[790]: New session 16 of user zuul.
Nov 28 12:37:29 np0005539065 systemd[1]: Started Session 16 of User zuul.
Nov 28 12:37:30 np0005539065 python3.9[75114]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 28 12:37:31 np0005539065 python3.9[75266]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:37:32 np0005539065 python3.9[75418]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:37:33 np0005539065 python3.9[75570]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgxwgVRvc4DgIisWlYpnAip8uoAljSJsPssJMvRfMkfCz1DDUjdTjbdiF91zEMjNDkwXRZU+afFzkfQNT6rAPsjd2NSwF9gsYnl9Xkr3GDKRGWqnuddLyeShvZC9O+XjCx/7kpuEXg+hPtfTMzDFkAVuz8I0x9jdE9qLCL5XuozbB1MGK5YYPTaRJ64+zqL7UWHyx9tiyGTXYMp72hDiZv07Gp3cNvhd7ZOm4itti8HHrjvCBuH332ZEyj2ZnthmIdVWaOTh6V4QYbvmWxnGYvas53dMpvImg4FPgKASD9Ebol+eHjG7wlXQ6wshlQa7DNd7d9yZRMwIY3tekBTbxmMy+hnVKpohcc5KND6C2eX3qoI/S8lmuXW3p+QThD8Ywv/UmdKtG+r2Vk1mhgmLnMPtl5PbxaTRMTf1un1RKo+aPuT1v9kgn2lNkSeGSJpNK4EB6UhsVAStObLNrnygjAtI8L9knFdiKUZXAD1JEI9ZFaRVMS/UnaFwjL/3PMdQE=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP3J0v4aV719vw2XqAbWgXZOVLglXkI95BWO3nqGv9wR#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNSeeqneRz1XVIEHnrcqXkqvvTKuXOq2wNKYpuLhyUZ+IE9H8cPPC6L2z3/6hJ+Ul7BFUg3gB8Gvbp76J1eUJss=#012 create=True mode=0644 path=/tmp/ansible.zpv5_a3b state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:37:33 np0005539065 python3.9[75722]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.zpv5_a3b' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:37:34 np0005539065 python3.9[75876]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.zpv5_a3b state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:37:35 np0005539065 systemd[1]: session-16.scope: Deactivated successfully.
Nov 28 12:37:35 np0005539065 systemd[1]: session-16.scope: Consumed 3.275s CPU time.
Nov 28 12:37:35 np0005539065 systemd-logind[790]: Session 16 logged out. Waiting for processes to exit.
Nov 28 12:37:35 np0005539065 systemd-logind[790]: Removed session 16.
Nov 28 12:37:36 np0005539065 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 28 12:37:40 np0005539065 systemd-logind[790]: New session 17 of user zuul.
Nov 28 12:37:40 np0005539065 systemd[1]: Started Session 17 of User zuul.
Nov 28 12:37:41 np0005539065 python3.9[76057]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:37:42 np0005539065 python3.9[76213]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 28 12:37:43 np0005539065 python3.9[76367]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 12:37:44 np0005539065 python3.9[76520]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:37:45 np0005539065 python3.9[76673]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:37:46 np0005539065 python3.9[76827]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:37:46 np0005539065 python3.9[76982]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:37:47 np0005539065 systemd[1]: session-17.scope: Deactivated successfully.
Nov 28 12:37:47 np0005539065 systemd[1]: session-17.scope: Consumed 4.298s CPU time.
Nov 28 12:37:47 np0005539065 systemd-logind[790]: Session 17 logged out. Waiting for processes to exit.
Nov 28 12:37:47 np0005539065 systemd-logind[790]: Removed session 17.
Nov 28 12:37:52 np0005539065 systemd-logind[790]: New session 18 of user zuul.
Nov 28 12:37:52 np0005539065 systemd[1]: Started Session 18 of User zuul.
Nov 28 12:37:53 np0005539065 python3.9[77160]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:37:54 np0005539065 python3.9[77316]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 28 12:37:55 np0005539065 python3.9[77400]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 28 12:37:57 np0005539065 python3.9[77551]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:37:59 np0005539065 python3.9[77702]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 28 12:37:59 np0005539065 python3.9[77852]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:38:00 np0005539065 python3.9[78002]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:38:00 np0005539065 systemd[1]: session-18.scope: Deactivated successfully.
Nov 28 12:38:00 np0005539065 systemd[1]: session-18.scope: Consumed 6.000s CPU time.
Nov 28 12:38:00 np0005539065 systemd-logind[790]: Session 18 logged out. Waiting for processes to exit.
Nov 28 12:38:00 np0005539065 systemd-logind[790]: Removed session 18.
Nov 28 12:38:06 np0005539065 systemd-logind[790]: New session 19 of user zuul.
Nov 28 12:38:06 np0005539065 systemd[1]: Started Session 19 of User zuul.
Nov 28 12:38:07 np0005539065 python3.9[78181]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:38:09 np0005539065 python3.9[78337]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:38:10 np0005539065 python3.9[78489]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:38:11 np0005539065 python3.9[78641]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:38:12 np0005539065 python3.9[78764]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351490.874629-65-265355841403419/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=ce20e067fa3169dc8f453d525a388478ba213d5e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:38:13 np0005539065 python3.9[78916]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:38:13 np0005539065 python3.9[79039]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351492.626682-65-22411698286182/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=fbc62177460a3a26c6a91927c196b26525718e56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:38:14 np0005539065 python3.9[79191]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:38:14 np0005539065 python3.9[79314]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351493.82246-65-33673225733448/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=902127f6278f9df781b14233a510de240001a6f5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:38:15 np0005539065 python3.9[79466]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:38:16 np0005539065 python3.9[79618]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:38:16 np0005539065 python3.9[79770]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:38:17 np0005539065 python3.9[79893]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351496.4585552-124-250793932579171/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=6a48cceeba5d830613bfb929b20cc5b41c197354 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:38:18 np0005539065 python3.9[80045]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:38:18 np0005539065 python3.9[80168]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351497.7726943-124-249849201691795/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=fbc62177460a3a26c6a91927c196b26525718e56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:38:19 np0005539065 python3.9[80320]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:38:19 np0005539065 python3.9[80443]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351498.9349744-124-29660579021324/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=105b166024a8f6e8645173c39e32df6e31de66d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:38:20 np0005539065 python3.9[80595]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:38:21 np0005539065 python3.9[80747]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:38:21 np0005539065 python3.9[80899]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:38:22 np0005539065 python3.9[81022]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351501.4453976-183-217065295809049/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=ee149bcf0fedeeda9b4d646997fcc01d2976f0dd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:38:22 np0005539065 python3.9[81174]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:38:23 np0005539065 python3.9[81297]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351502.5317838-183-254933749963053/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=16fb57405f5e2911fbc6470f1d13bc236b525274 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:38:24 np0005539065 python3.9[81449]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:38:24 np0005539065 python3.9[81572]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351503.7026262-183-156640344083485/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=cd23a6e68bf1565c9e56fee5e05eedd81544ac71 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:38:25 np0005539065 python3.9[81724]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:38:25 np0005539065 python3.9[81876]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:38:26 np0005539065 python3.9[82028]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:38:27 np0005539065 python3.9[82151]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351506.1585555-242-29632020926549/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=6c6986628bdaa4f969384301f9da4235fd33ca23 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:38:27 np0005539065 python3.9[82303]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:38:28 np0005539065 python3.9[82426]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351507.4493296-242-79642165290030/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=01a30446cf972f13545d182dd26061dd5ea693bd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:38:28 np0005539065 python3.9[82578]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:38:29 np0005539065 python3.9[82701]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351508.5421686-242-136705252135543/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=7e9894878509a2652bcee4fa392d0cc4a7658de6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:38:30 np0005539065 python3.9[82853]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:38:30 np0005539065 python3.9[83005]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:38:31 np0005539065 python3.9[83157]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:38:31 np0005539065 python3.9[83280]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351510.857383-301-55434864097845/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b4ede4e145de688b30d6b5c455b52056df97c984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:38:32 np0005539065 python3.9[83432]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:38:33 np0005539065 python3.9[83555]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351512.0853922-301-3449830659583/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=16fb57405f5e2911fbc6470f1d13bc236b525274 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:38:33 np0005539065 python3.9[83707]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:38:34 np0005539065 python3.9[83830]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351513.2200463-301-75528204329378/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=8b54d94e7fb8ba91238d7bb7b4413be989adeaa8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:38:35 np0005539065 python3.9[83982]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:38:35 np0005539065 python3.9[84134]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:38:36 np0005539065 python3.9[84257]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351515.4754534-369-55720402643993/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=6db368e8eab9994da74d1f7f8980fe4061371735 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:38:37 np0005539065 python3.9[84409]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:38:37 np0005539065 python3.9[84561]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:38:38 np0005539065 python3.9[84684]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351517.3526928-393-46227112007747/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=6db368e8eab9994da74d1f7f8980fe4061371735 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:38:38 np0005539065 python3.9[84836]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:38:39 np0005539065 python3.9[84988]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:38:40 np0005539065 python3.9[85111]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351519.129941-417-220161611390885/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=6db368e8eab9994da74d1f7f8980fe4061371735 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:38:40 np0005539065 python3.9[85263]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:38:41 np0005539065 python3.9[85415]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:38:41 np0005539065 chronyd[65743]: Selected source 198.181.199.86 (pool.ntp.org)
Nov 28 12:38:41 np0005539065 python3.9[85538]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351520.9787915-441-17317189421249/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=6db368e8eab9994da74d1f7f8980fe4061371735 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:38:42 np0005539065 python3.9[85690]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:38:43 np0005539065 python3.9[85842]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:38:43 np0005539065 python3.9[85965]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351522.7987816-465-122700020054074/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=6db368e8eab9994da74d1f7f8980fe4061371735 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:38:44 np0005539065 python3.9[86117]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:38:44 np0005539065 python3.9[86269]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:38:45 np0005539065 python3.9[86392]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351524.5691953-489-35842948747877/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=6db368e8eab9994da74d1f7f8980fe4061371735 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:38:46 np0005539065 python3.9[86544]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:38:46 np0005539065 python3.9[86696]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:38:47 np0005539065 python3.9[86819]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351526.2537796-513-152383138952988/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=6db368e8eab9994da74d1f7f8980fe4061371735 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:38:47 np0005539065 python3.9[86971]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:38:48 np0005539065 python3.9[87123]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:38:49 np0005539065 python3.9[87246]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351528.1246738-537-226420106655162/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=6db368e8eab9994da74d1f7f8980fe4061371735 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:38:49 np0005539065 systemd[1]: session-19.scope: Deactivated successfully.
Nov 28 12:38:49 np0005539065 systemd[1]: session-19.scope: Consumed 33.698s CPU time.
Nov 28 12:38:49 np0005539065 systemd-logind[790]: Session 19 logged out. Waiting for processes to exit.
Nov 28 12:38:49 np0005539065 systemd-logind[790]: Removed session 19.
Nov 28 12:38:55 np0005539065 systemd-logind[790]: New session 20 of user zuul.
Nov 28 12:38:55 np0005539065 systemd[1]: Started Session 20 of User zuul.
Nov 28 12:38:56 np0005539065 python3.9[87425]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:38:57 np0005539065 python3.9[87581]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:38:58 np0005539065 python3.9[87733]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:38:58 np0005539065 python3.9[87883]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:38:59 np0005539065 python3.9[88035]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 28 12:39:01 np0005539065 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 28 12:39:02 np0005539065 python3.9[88191]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 28 12:39:03 np0005539065 python3.9[88275]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 28 12:39:05 np0005539065 python3.9[88428]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 28 12:39:06 np0005539065 python3[88583]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 28 12:39:07 np0005539065 python3.9[88735]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:39:08 np0005539065 python3.9[88887]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:39:08 np0005539065 python3.9[88965]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:39:09 np0005539065 python3.9[89117]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:39:09 np0005539065 python3.9[89195]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.qo27qfc6 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:39:10 np0005539065 python3.9[89347]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:39:10 np0005539065 python3.9[89425]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:39:11 np0005539065 python3.9[89577]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:39:12 np0005539065 python3[89730]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 28 12:39:13 np0005539065 python3.9[89882]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:39:13 np0005539065 python3.9[90007]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351552.5691407-157-201339689382674/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:39:14 np0005539065 python3.9[90159]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:39:15 np0005539065 python3.9[90284]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351553.963329-172-172319201272976/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:39:15 np0005539065 python3.9[90436]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:39:16 np0005539065 python3.9[90561]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351555.2105293-187-40861924166984/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:39:17 np0005539065 python3.9[90713]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:39:17 np0005539065 python3.9[90838]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351556.6748097-202-205870433443396/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:39:18 np0005539065 python3.9[90990]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:39:19 np0005539065 python3.9[91115]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351557.9710932-217-100436638905684/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:39:19 np0005539065 python3.9[91267]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:39:20 np0005539065 python3.9[91419]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:39:21 np0005539065 python3.9[91574]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:39:21 np0005539065 python3.9[91726]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:39:22 np0005539065 python3.9[91879]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:39:23 np0005539065 python3.9[92033]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:39:23 np0005539065 python3.9[92188]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:39:25 np0005539065 python3.9[92338]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:39:26 np0005539065 python3.9[92491]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:39:26 np0005539065 ovs-vsctl[92492]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 28 12:39:26 np0005539065 python3.9[92644]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:39:27 np0005539065 python3.9[92799]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:39:27 np0005539065 ovs-vsctl[92800]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 28 12:39:28 np0005539065 python3.9[92950]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:39:28 np0005539065 python3.9[93104]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:39:29 np0005539065 python3.9[93256]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:39:30 np0005539065 python3.9[93334]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:39:30 np0005539065 python3.9[93486]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:39:31 np0005539065 python3.9[93564]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:39:32 np0005539065 python3.9[93716]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:39:32 np0005539065 python3.9[93868]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:39:33 np0005539065 python3.9[93946]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:39:34 np0005539065 python3.9[94098]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:39:34 np0005539065 python3.9[94176]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:39:35 np0005539065 python3.9[94328]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:39:35 np0005539065 systemd[1]: Reloading.
Nov 28 12:39:35 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:39:35 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:39:36 np0005539065 python3.9[94517]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:39:36 np0005539065 python3.9[94595]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:39:37 np0005539065 python3.9[94747]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:39:37 np0005539065 python3.9[94825]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:39:38 np0005539065 python3.9[94977]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:39:38 np0005539065 systemd[1]: Reloading.
Nov 28 12:39:38 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:39:38 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:39:39 np0005539065 systemd[1]: Starting Create netns directory...
Nov 28 12:39:39 np0005539065 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 28 12:39:39 np0005539065 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 28 12:39:39 np0005539065 systemd[1]: Finished Create netns directory.
Nov 28 12:39:39 np0005539065 python3.9[95170]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:39:40 np0005539065 python3.9[95322]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:39:41 np0005539065 python3.9[95445]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764351580.0707424-468-66720860768237/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:39:42 np0005539065 python3.9[95597]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:39:42 np0005539065 python3.9[95749]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:39:43 np0005539065 python3.9[95872]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764351582.2806954-493-186935417634601/.source.json _original_basename=.u2gpi96y follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:39:44 np0005539065 python3.9[96024]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:39:46 np0005539065 python3.9[96451]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 28 12:39:47 np0005539065 python3.9[96603]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 28 12:39:48 np0005539065 python3.9[96755]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 28 12:39:48 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:39:49 np0005539065 python3[96920]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 28 12:39:49 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:39:49 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:39:50 np0005539065 podman[96956]: 2025-11-28 17:39:50.031395086 +0000 UTC m=+0.056729244 container create 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 28 12:39:50 np0005539065 podman[96956]: 2025-11-28 17:39:50.001300375 +0000 UTC m=+0.026634553 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 28 12:39:50 np0005539065 python3[96920]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 28 12:39:50 np0005539065 python3.9[97146]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:39:50 np0005539065 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 12:39:51 np0005539065 python3.9[97300]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:39:52 np0005539065 python3.9[97376]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:39:52 np0005539065 python3.9[97527]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764351592.0849526-581-258069687532120/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:39:53 np0005539065 python3.9[97603]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 28 12:39:53 np0005539065 systemd[1]: Reloading.
Nov 28 12:39:53 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:39:53 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:39:54 np0005539065 python3.9[97715]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:39:54 np0005539065 systemd[1]: Reloading.
Nov 28 12:39:54 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:39:54 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:39:54 np0005539065 systemd[1]: Starting ovn_controller container...
Nov 28 12:39:54 np0005539065 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 28 12:39:54 np0005539065 systemd[1]: Started libcrun container.
Nov 28 12:39:54 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8023be6b488cc65c037dd75396f32fb57af8ef11d3d69a11fd213de0973e9abc/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 28 12:39:54 np0005539065 systemd[1]: Started /usr/bin/podman healthcheck run 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3.
Nov 28 12:39:54 np0005539065 podman[97756]: 2025-11-28 17:39:54.411014371 +0000 UTC m=+0.130575992 container init 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller)
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: + sudo -E kolla_set_configs
Nov 28 12:39:54 np0005539065 podman[97756]: 2025-11-28 17:39:54.43705113 +0000 UTC m=+0.156612731 container start 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 28 12:39:54 np0005539065 edpm-start-podman-container[97756]: ovn_controller
Nov 28 12:39:54 np0005539065 systemd[1]: Created slice User Slice of UID 0.
Nov 28 12:39:54 np0005539065 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 28 12:39:54 np0005539065 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 28 12:39:54 np0005539065 systemd[1]: Starting User Manager for UID 0...
Nov 28 12:39:54 np0005539065 edpm-start-podman-container[97755]: Creating additional drop-in dependency for "ovn_controller" (3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3)
Nov 28 12:39:54 np0005539065 podman[97777]: 2025-11-28 17:39:54.524241904 +0000 UTC m=+0.076479868 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 28 12:39:54 np0005539065 systemd[1]: 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3-1d60ecbca3532f40.service: Main process exited, code=exited, status=1/FAILURE
Nov 28 12:39:54 np0005539065 systemd[1]: 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3-1d60ecbca3532f40.service: Failed with result 'exit-code'.
Nov 28 12:39:54 np0005539065 systemd[1]: Reloading.
Nov 28 12:39:54 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:39:54 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:39:54 np0005539065 systemd[97807]: Queued start job for default target Main User Target.
Nov 28 12:39:54 np0005539065 systemd[97807]: Created slice User Application Slice.
Nov 28 12:39:54 np0005539065 systemd[97807]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 28 12:39:54 np0005539065 systemd[97807]: Started Daily Cleanup of User's Temporary Directories.
Nov 28 12:39:54 np0005539065 systemd[97807]: Reached target Paths.
Nov 28 12:39:54 np0005539065 systemd[97807]: Reached target Timers.
Nov 28 12:39:54 np0005539065 systemd[97807]: Starting D-Bus User Message Bus Socket...
Nov 28 12:39:54 np0005539065 systemd[97807]: Starting Create User's Volatile Files and Directories...
Nov 28 12:39:54 np0005539065 systemd[97807]: Finished Create User's Volatile Files and Directories.
Nov 28 12:39:54 np0005539065 systemd[97807]: Listening on D-Bus User Message Bus Socket.
Nov 28 12:39:54 np0005539065 systemd[97807]: Reached target Sockets.
Nov 28 12:39:54 np0005539065 systemd[97807]: Reached target Basic System.
Nov 28 12:39:54 np0005539065 systemd[97807]: Reached target Main User Target.
Nov 28 12:39:54 np0005539065 systemd[97807]: Startup finished in 143ms.
Nov 28 12:39:54 np0005539065 systemd[1]: Started User Manager for UID 0.
Nov 28 12:39:54 np0005539065 systemd[1]: Started ovn_controller container.
Nov 28 12:39:54 np0005539065 systemd[1]: Started Session c1 of User root.
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: INFO:__main__:Validating config file
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: INFO:__main__:Writing out command to execute
Nov 28 12:39:54 np0005539065 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: ++ cat /run_command
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: + ARGS=
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: + sudo kolla_copy_cacerts
Nov 28 12:39:54 np0005539065 systemd[1]: Started Session c2 of User root.
Nov 28 12:39:54 np0005539065 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: + [[ ! -n '' ]]
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: + . kolla_extend_start
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: + umask 0022
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 28 12:39:54 np0005539065 NetworkManager[56307]: <info>  [1764351594.9280] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 28 12:39:54 np0005539065 NetworkManager[56307]: <info>  [1764351594.9291] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 28 12:39:54 np0005539065 NetworkManager[56307]: <info>  [1764351594.9305] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Nov 28 12:39:54 np0005539065 NetworkManager[56307]: <info>  [1764351594.9314] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Nov 28 12:39:54 np0005539065 NetworkManager[56307]: <info>  [1764351594.9318] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 28 12:39:54 np0005539065 kernel: br-int: entered promiscuous mode
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 28 12:39:54 np0005539065 ovn_controller[97771]: 2025-11-28T17:39:54Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 28 12:39:54 np0005539065 NetworkManager[56307]: <info>  [1764351594.9556] manager: (ovn-9157ad-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 28 12:39:54 np0005539065 kernel: genev_sys_6081: entered promiscuous mode
Nov 28 12:39:54 np0005539065 NetworkManager[56307]: <info>  [1764351594.9716] device (genev_sys_6081): carrier: link connected
Nov 28 12:39:54 np0005539065 NetworkManager[56307]: <info>  [1764351594.9721] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Nov 28 12:39:54 np0005539065 systemd-udevd[97927]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 12:39:54 np0005539065 systemd-udevd[97928]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 12:39:55 np0005539065 python3.9[98037]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:39:55 np0005539065 ovs-vsctl[98038]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 28 12:39:56 np0005539065 python3.9[98190]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:39:56 np0005539065 ovs-vsctl[98192]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 28 12:39:56 np0005539065 python3.9[98345]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:39:56 np0005539065 ovs-vsctl[98346]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 28 12:39:57 np0005539065 systemd[1]: session-20.scope: Deactivated successfully.
Nov 28 12:39:57 np0005539065 systemd[1]: session-20.scope: Consumed 46.178s CPU time.
Nov 28 12:39:57 np0005539065 systemd-logind[790]: Session 20 logged out. Waiting for processes to exit.
Nov 28 12:39:57 np0005539065 systemd-logind[790]: Removed session 20.
Nov 28 12:40:04 np0005539065 systemd-logind[790]: New session 22 of user zuul.
Nov 28 12:40:04 np0005539065 systemd[1]: Started Session 22 of User zuul.
Nov 28 12:40:04 np0005539065 systemd[1]: Stopping User Manager for UID 0...
Nov 28 12:40:04 np0005539065 systemd[97807]: Activating special unit Exit the Session...
Nov 28 12:40:04 np0005539065 systemd[97807]: Stopped target Main User Target.
Nov 28 12:40:04 np0005539065 systemd[97807]: Stopped target Basic System.
Nov 28 12:40:04 np0005539065 systemd[97807]: Stopped target Paths.
Nov 28 12:40:04 np0005539065 systemd[97807]: Stopped target Sockets.
Nov 28 12:40:04 np0005539065 systemd[97807]: Stopped target Timers.
Nov 28 12:40:04 np0005539065 systemd[97807]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 28 12:40:04 np0005539065 systemd[97807]: Closed D-Bus User Message Bus Socket.
Nov 28 12:40:04 np0005539065 systemd[97807]: Stopped Create User's Volatile Files and Directories.
Nov 28 12:40:04 np0005539065 systemd[97807]: Removed slice User Application Slice.
Nov 28 12:40:04 np0005539065 systemd[97807]: Reached target Shutdown.
Nov 28 12:40:04 np0005539065 systemd[97807]: Finished Exit the Session.
Nov 28 12:40:04 np0005539065 systemd[97807]: Reached target Exit the Session.
Nov 28 12:40:04 np0005539065 systemd[1]: user@0.service: Deactivated successfully.
Nov 28 12:40:04 np0005539065 systemd[1]: Stopped User Manager for UID 0.
Nov 28 12:40:04 np0005539065 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 28 12:40:04 np0005539065 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 28 12:40:04 np0005539065 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 28 12:40:04 np0005539065 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 28 12:40:04 np0005539065 systemd[1]: Removed slice User Slice of UID 0.
Nov 28 12:40:05 np0005539065 python3.9[98524]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:40:06 np0005539065 python3.9[98683]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:40:07 np0005539065 python3.9[98835]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:40:07 np0005539065 python3.9[98987]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:40:08 np0005539065 python3.9[99139]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:40:08 np0005539065 python3.9[99291]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:40:10 np0005539065 python3.9[99441]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:40:10 np0005539065 python3.9[99593]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 28 12:40:12 np0005539065 python3.9[99743]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:40:13 np0005539065 python3.9[99864]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764351611.762391-86-30766071616784/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:40:14 np0005539065 python3.9[100015]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:40:14 np0005539065 python3.9[100136]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764351613.3357253-101-68054223785228/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:40:15 np0005539065 python3.9[100288]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 28 12:40:16 np0005539065 python3.9[100372]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 28 12:40:18 np0005539065 python3.9[100525]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 28 12:40:19 np0005539065 python3.9[100678]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:40:20 np0005539065 python3.9[100799]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764351619.2380576-138-205653929770857/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:40:21 np0005539065 python3.9[100949]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:40:21 np0005539065 python3.9[101070]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764351620.6586127-138-175240087930962/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:40:23 np0005539065 python3.9[101220]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:40:23 np0005539065 python3.9[101341]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764351622.6567342-182-88849922683765/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:40:24 np0005539065 python3.9[101491]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:40:24 np0005539065 ovn_controller[97771]: 2025-11-28T17:40:24Z|00025|memory|INFO|16000 kB peak resident set size after 29.8 seconds
Nov 28 12:40:24 np0005539065 ovn_controller[97771]: 2025-11-28T17:40:24Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Nov 28 12:40:24 np0005539065 podman[101586]: 2025-11-28 17:40:24.697212441 +0000 UTC m=+0.091390422 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 28 12:40:24 np0005539065 python3.9[101631]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764351623.8214025-182-19685683715351/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:40:25 np0005539065 python3.9[101788]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:40:26 np0005539065 python3.9[101942]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:40:27 np0005539065 python3.9[102094]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:40:27 np0005539065 python3.9[102172]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:40:28 np0005539065 python3.9[102324]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:40:28 np0005539065 python3.9[102402]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:40:29 np0005539065 python3.9[102554]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:40:30 np0005539065 python3.9[102706]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:40:30 np0005539065 python3.9[102784]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:40:31 np0005539065 python3.9[102936]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:40:31 np0005539065 python3.9[103014]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:40:32 np0005539065 python3.9[103166]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:40:32 np0005539065 systemd[1]: Reloading.
Nov 28 12:40:32 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:40:32 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:40:33 np0005539065 python3.9[103355]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:40:34 np0005539065 python3.9[103433]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:40:34 np0005539065 python3.9[103585]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:40:35 np0005539065 python3.9[103663]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:40:35 np0005539065 python3.9[103815]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:40:35 np0005539065 systemd[1]: Reloading.
Nov 28 12:40:35 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:40:35 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:40:36 np0005539065 systemd[1]: Starting Create netns directory...
Nov 28 12:40:36 np0005539065 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 28 12:40:36 np0005539065 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 28 12:40:36 np0005539065 systemd[1]: Finished Create netns directory.
Nov 28 12:40:36 np0005539065 python3.9[104009]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:40:37 np0005539065 python3.9[104161]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:40:38 np0005539065 python3.9[104284]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764351637.1018734-333-135512641814106/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:40:38 np0005539065 python3.9[104436]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:40:39 np0005539065 python3.9[104588]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:40:40 np0005539065 python3.9[104711]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764351639.1679766-358-212306687095579/.source.json _original_basename=.oyfjzygx follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:40:41 np0005539065 python3.9[104863]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:40:43 np0005539065 python3.9[105290]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 28 12:40:43 np0005539065 python3.9[105442]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 28 12:40:44 np0005539065 python3.9[105594]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 28 12:40:46 np0005539065 python3[105772]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 28 12:40:46 np0005539065 podman[105805]: 2025-11-28 17:40:46.29687144 +0000 UTC m=+0.045858365 container create b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 12:40:46 np0005539065 podman[105805]: 2025-11-28 17:40:46.27337957 +0000 UTC m=+0.022366515 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 28 12:40:46 np0005539065 python3[105772]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 28 12:40:46 np0005539065 python3.9[105996]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:40:47 np0005539065 python3.9[106150]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:40:48 np0005539065 python3.9[106226]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:40:48 np0005539065 python3.9[106377]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764351648.1295078-446-105385681161513/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:40:49 np0005539065 python3.9[106453]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 28 12:40:49 np0005539065 systemd[1]: Reloading.
Nov 28 12:40:49 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:40:49 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:40:50 np0005539065 python3.9[106563]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:40:50 np0005539065 systemd[1]: Reloading.
Nov 28 12:40:50 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:40:50 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:40:50 np0005539065 systemd[1]: Starting ovn_metadata_agent container...
Nov 28 12:40:50 np0005539065 systemd[1]: Started libcrun container.
Nov 28 12:40:50 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39f65c08f327b90ac8c8e277d92bc365d3333aeac9e33a3f08733583583120bf/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 28 12:40:50 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39f65c08f327b90ac8c8e277d92bc365d3333aeac9e33a3f08733583583120bf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 28 12:40:50 np0005539065 systemd[1]: Started /usr/bin/podman healthcheck run b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f.
Nov 28 12:40:50 np0005539065 podman[106604]: 2025-11-28 17:40:50.464972174 +0000 UTC m=+0.136100840 container init b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 28 12:40:50 np0005539065 ovn_metadata_agent[106619]: + sudo -E kolla_set_configs
Nov 28 12:40:50 np0005539065 podman[106604]: 2025-11-28 17:40:50.495150908 +0000 UTC m=+0.166279554 container start b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 12:40:50 np0005539065 edpm-start-podman-container[106604]: ovn_metadata_agent
Nov 28 12:40:50 np0005539065 ovn_metadata_agent[106619]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 28 12:40:50 np0005539065 ovn_metadata_agent[106619]: INFO:__main__:Validating config file
Nov 28 12:40:50 np0005539065 ovn_metadata_agent[106619]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 28 12:40:50 np0005539065 ovn_metadata_agent[106619]: INFO:__main__:Copying service configuration files
Nov 28 12:40:50 np0005539065 ovn_metadata_agent[106619]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 28 12:40:50 np0005539065 ovn_metadata_agent[106619]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 28 12:40:50 np0005539065 ovn_metadata_agent[106619]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 28 12:40:50 np0005539065 ovn_metadata_agent[106619]: INFO:__main__:Writing out command to execute
Nov 28 12:40:50 np0005539065 ovn_metadata_agent[106619]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 28 12:40:50 np0005539065 ovn_metadata_agent[106619]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 28 12:40:50 np0005539065 ovn_metadata_agent[106619]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 28 12:40:50 np0005539065 ovn_metadata_agent[106619]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 28 12:40:50 np0005539065 ovn_metadata_agent[106619]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 28 12:40:50 np0005539065 ovn_metadata_agent[106619]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 28 12:40:50 np0005539065 ovn_metadata_agent[106619]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 28 12:40:50 np0005539065 edpm-start-podman-container[106603]: Creating additional drop-in dependency for "ovn_metadata_agent" (b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f)
Nov 28 12:40:50 np0005539065 ovn_metadata_agent[106619]: ++ cat /run_command
Nov 28 12:40:50 np0005539065 podman[106625]: 2025-11-28 17:40:50.56315613 +0000 UTC m=+0.053406748 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 28 12:40:50 np0005539065 ovn_metadata_agent[106619]: + CMD=neutron-ovn-metadata-agent
Nov 28 12:40:50 np0005539065 ovn_metadata_agent[106619]: + ARGS=
Nov 28 12:40:50 np0005539065 ovn_metadata_agent[106619]: + sudo kolla_copy_cacerts
Nov 28 12:40:50 np0005539065 systemd[1]: Reloading.
Nov 28 12:40:50 np0005539065 ovn_metadata_agent[106619]: + [[ ! -n '' ]]
Nov 28 12:40:50 np0005539065 ovn_metadata_agent[106619]: + . kolla_extend_start
Nov 28 12:40:50 np0005539065 ovn_metadata_agent[106619]: Running command: 'neutron-ovn-metadata-agent'
Nov 28 12:40:50 np0005539065 ovn_metadata_agent[106619]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 28 12:40:50 np0005539065 ovn_metadata_agent[106619]: + umask 0022
Nov 28 12:40:50 np0005539065 ovn_metadata_agent[106619]: + exec neutron-ovn-metadata-agent
Nov 28 12:40:50 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:40:50 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:40:50 np0005539065 systemd[1]: Started ovn_metadata_agent container.
Nov 28 12:40:51 np0005539065 systemd-logind[790]: Session 22 logged out. Waiting for processes to exit.
Nov 28 12:40:51 np0005539065 systemd[1]: session-22.scope: Deactivated successfully.
Nov 28 12:40:51 np0005539065 systemd[1]: session-22.scope: Consumed 34.989s CPU time.
Nov 28 12:40:51 np0005539065 systemd-logind[790]: Removed session 22.
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.532 106624 INFO neutron.common.config [-] Logging enabled!#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.532 106624 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.532 106624 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.533 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.533 106624 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.533 106624 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.533 106624 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.533 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.534 106624 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.534 106624 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.534 106624 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.534 106624 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.534 106624 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.534 106624 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.535 106624 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.535 106624 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.535 106624 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.535 106624 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.535 106624 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.535 106624 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.535 106624 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.536 106624 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.536 106624 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.536 106624 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.536 106624 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.536 106624 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.536 106624 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.536 106624 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.536 106624 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.537 106624 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.537 106624 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.537 106624 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.537 106624 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.537 106624 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.537 106624 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.538 106624 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.538 106624 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.538 106624 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.538 106624 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.538 106624 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.538 106624 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.539 106624 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.539 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.539 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.539 106624 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.539 106624 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.539 106624 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.539 106624 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.539 106624 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.539 106624 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.540 106624 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.540 106624 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.540 106624 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.540 106624 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.540 106624 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.540 106624 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.540 106624 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.540 106624 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.541 106624 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.541 106624 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.541 106624 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.541 106624 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.541 106624 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.541 106624 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.541 106624 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.542 106624 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.542 106624 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.542 106624 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.542 106624 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.542 106624 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.542 106624 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.543 106624 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.543 106624 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.543 106624 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.543 106624 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.543 106624 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.543 106624 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.543 106624 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.544 106624 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.544 106624 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.544 106624 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.544 106624 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.544 106624 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.544 106624 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.544 106624 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.545 106624 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.545 106624 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.545 106624 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.545 106624 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.545 106624 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.545 106624 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.545 106624 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.546 106624 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.546 106624 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.546 106624 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.546 106624 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.546 106624 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.546 106624 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.546 106624 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.547 106624 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.547 106624 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.547 106624 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.547 106624 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.547 106624 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.547 106624 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.547 106624 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.548 106624 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.548 106624 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.548 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.548 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.548 106624 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.548 106624 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.548 106624 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.549 106624 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.549 106624 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.549 106624 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.549 106624 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.549 106624 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.549 106624 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.550 106624 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.550 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.550 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.550 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.550 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.550 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.551 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.551 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.551 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.551 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.551 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.551 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.551 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.552 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.552 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.552 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.552 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.552 106624 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.552 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.553 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.553 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.553 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.553 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.553 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.553 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.553 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.554 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.554 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.554 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.554 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.554 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.554 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.554 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.555 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.555 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.555 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.555 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.555 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.555 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.556 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.556 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.556 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.556 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.556 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.556 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.556 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.557 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.557 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.557 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.557 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.557 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.557 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.557 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.558 106624 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.558 106624 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.558 106624 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.558 106624 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.558 106624 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.558 106624 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.558 106624 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.559 106624 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.559 106624 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.559 106624 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.559 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.559 106624 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.559 106624 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.560 106624 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.560 106624 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.560 106624 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.560 106624 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.560 106624 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.560 106624 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.560 106624 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.561 106624 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.561 106624 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.561 106624 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.561 106624 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.561 106624 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.561 106624 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.561 106624 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.562 106624 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.562 106624 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.562 106624 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.562 106624 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.562 106624 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.562 106624 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.562 106624 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.563 106624 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.563 106624 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.563 106624 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.563 106624 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.563 106624 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.563 106624 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.563 106624 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.563 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.564 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.564 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.564 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.564 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.564 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.564 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.564 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.565 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.565 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.565 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.565 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.565 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.565 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.566 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.566 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.566 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.566 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.566 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.566 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.566 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.567 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.567 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.567 106624 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.567 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.567 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.567 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.567 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.568 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.568 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.568 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.568 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.568 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.568 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.569 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.569 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.569 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.569 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.569 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.569 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.570 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.570 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.570 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.570 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.570 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.570 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.570 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.571 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.571 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.571 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.571 106624 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.571 106624 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.572 106624 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.572 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.572 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.572 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.572 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.572 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.572 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.573 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.573 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.573 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.573 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.573 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.574 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.574 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.574 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.574 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.574 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.574 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.574 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.574 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.575 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.575 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.575 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.575 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.575 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.575 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.575 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.575 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.576 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.576 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.576 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.576 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.576 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.576 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.577 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.577 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.577 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.577 106624 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.577 106624 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.589 106624 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.589 106624 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.589 106624 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.590 106624 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.590 106624 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.605 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name d60b742f-7e94-4137-b50a-cfc8eac54167 (UUID: d60b742f-7e94-4137-b50a-cfc8eac54167) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.638 106624 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.638 106624 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.638 106624 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.638 106624 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.642 106624 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.648 106624 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.653 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'd60b742f-7e94-4137-b50a-cfc8eac54167'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], external_ids={}, name=d60b742f-7e94-4137-b50a-cfc8eac54167, nb_cfg_timestamp=1764351602958, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.654 106624 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fb303cb4160>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.654 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.655 106624 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.655 106624 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.655 106624 INFO oslo_service.service [-] Starting 1 workers#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.660 106624 DEBUG oslo_service.service [-] Started child 106729 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.663 106729 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-235838'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.663 106624 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpta4anop5/privsep.sock']#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.683 106729 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.683 106729 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.683 106729 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.686 106729 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.691 106729 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Nov 28 12:40:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:52.697 106729 INFO eventlet.wsgi.server [-] (106729) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Nov 28 12:40:53 np0005539065 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 28 12:40:53 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:53.402 106624 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 28 12:40:53 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:53.403 106624 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpta4anop5/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 28 12:40:53 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:53.253 106734 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 28 12:40:53 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:53.257 106734 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 28 12:40:53 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:53.259 106734 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Nov 28 12:40:53 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:53.259 106734 INFO oslo.privsep.daemon [-] privsep daemon running as pid 106734#033[00m
Nov 28 12:40:53 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:53.406 106734 DEBUG oslo.privsep.daemon [-] privsep: reply[f57bf6ec-775e-4c3e-a7f3-36f228dd227e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.011 106734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.011 106734 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.011 106734 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.626 106734 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f0509a-cdbc-4488-9cd5-be44b704a8fb]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.629 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=d60b742f-7e94-4137-b50a-cfc8eac54167, column=external_ids, values=({'neutron:ovn-metadata-id': 'e068e4c8-d8b3-5958-8c8f-a24ea27ea33f'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.649 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d60b742f-7e94-4137-b50a-cfc8eac54167, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.655 106624 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.655 106624 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.655 106624 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.655 106624 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.655 106624 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.655 106624 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.656 106624 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.656 106624 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.656 106624 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.656 106624 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.656 106624 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.656 106624 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.656 106624 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.656 106624 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.656 106624 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.657 106624 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.657 106624 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.657 106624 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.657 106624 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.657 106624 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.657 106624 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.657 106624 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.657 106624 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.657 106624 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.658 106624 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.658 106624 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.658 106624 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.658 106624 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.658 106624 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.658 106624 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.658 106624 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.658 106624 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.659 106624 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.659 106624 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.659 106624 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.659 106624 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.659 106624 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.659 106624 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.659 106624 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.659 106624 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.660 106624 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.660 106624 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.660 106624 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.660 106624 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.660 106624 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.660 106624 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.660 106624 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.660 106624 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.660 106624 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.660 106624 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.661 106624 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.661 106624 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.661 106624 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.661 106624 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.661 106624 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.661 106624 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.661 106624 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.661 106624 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.662 106624 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.662 106624 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.662 106624 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.662 106624 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.662 106624 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.662 106624 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.662 106624 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.662 106624 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.662 106624 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.663 106624 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.663 106624 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.663 106624 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.663 106624 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.663 106624 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.663 106624 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.663 106624 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.663 106624 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.663 106624 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.664 106624 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.664 106624 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.664 106624 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.664 106624 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.664 106624 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.664 106624 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.664 106624 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.664 106624 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.664 106624 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.665 106624 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.665 106624 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.665 106624 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.665 106624 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.665 106624 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.665 106624 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.665 106624 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.665 106624 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.665 106624 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.665 106624 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.666 106624 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.666 106624 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.666 106624 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.666 106624 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.666 106624 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.666 106624 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.666 106624 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.666 106624 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.666 106624 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.666 106624 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.667 106624 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.667 106624 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.667 106624 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.667 106624 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.667 106624 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.667 106624 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.667 106624 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.667 106624 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.668 106624 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.668 106624 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.668 106624 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.668 106624 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.668 106624 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.668 106624 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.668 106624 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.668 106624 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.668 106624 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.669 106624 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.669 106624 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.669 106624 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.669 106624 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.669 106624 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.669 106624 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.669 106624 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.669 106624 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.669 106624 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.670 106624 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.670 106624 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.670 106624 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.670 106624 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.670 106624 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.670 106624 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.670 106624 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.670 106624 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.670 106624 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.671 106624 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.671 106624 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.671 106624 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.671 106624 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.671 106624 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.671 106624 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.671 106624 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.671 106624 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.671 106624 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.671 106624 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.672 106624 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.672 106624 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.672 106624 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.672 106624 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.672 106624 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.672 106624 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.672 106624 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.672 106624 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.672 106624 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.672 106624 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.672 106624 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.673 106624 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.673 106624 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.673 106624 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.673 106624 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.673 106624 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.673 106624 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.673 106624 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.673 106624 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.673 106624 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.674 106624 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.674 106624 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.674 106624 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.674 106624 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.674 106624 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.674 106624 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.674 106624 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.674 106624 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.674 106624 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.675 106624 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.675 106624 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.675 106624 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.675 106624 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.675 106624 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.675 106624 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.675 106624 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.675 106624 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.675 106624 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.676 106624 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.676 106624 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.676 106624 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.676 106624 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.676 106624 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.676 106624 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.676 106624 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.676 106624 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.676 106624 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.677 106624 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.677 106624 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.677 106624 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.677 106624 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.677 106624 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.677 106624 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.677 106624 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.677 106624 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.677 106624 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.677 106624 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.678 106624 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.678 106624 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.678 106624 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.678 106624 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.678 106624 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.678 106624 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.678 106624 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.678 106624 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.678 106624 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.678 106624 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.679 106624 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.679 106624 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.679 106624 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.679 106624 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.679 106624 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.679 106624 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.679 106624 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.679 106624 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.679 106624 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.679 106624 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.680 106624 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.680 106624 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.680 106624 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.680 106624 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.680 106624 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.680 106624 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.680 106624 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.680 106624 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.680 106624 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.680 106624 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.681 106624 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.681 106624 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.681 106624 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.681 106624 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.681 106624 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.681 106624 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.681 106624 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.681 106624 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.681 106624 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.682 106624 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.682 106624 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.682 106624 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.682 106624 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.682 106624 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.682 106624 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.682 106624 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.682 106624 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.682 106624 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.682 106624 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.683 106624 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.683 106624 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.683 106624 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.683 106624 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.683 106624 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.683 106624 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.683 106624 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.683 106624 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.683 106624 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.683 106624 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.684 106624 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.684 106624 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.684 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.684 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.684 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.684 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.685 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.685 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.685 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.685 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.685 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.685 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.685 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.685 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.686 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.686 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.686 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.686 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.686 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.686 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.686 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.686 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.686 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.687 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.687 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.687 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.687 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.687 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.687 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.687 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.687 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.687 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.687 106624 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.688 106624 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.688 106624 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.688 106624 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.688 106624 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:40:54 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:40:54.688 106624 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 28 12:40:55 np0005539065 podman[106739]: 2025-11-28 17:40:55.052286044 +0000 UTC m=+0.099128335 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 28 12:40:56 np0005539065 systemd-logind[790]: New session 23 of user zuul.
Nov 28 12:40:56 np0005539065 systemd[1]: Started Session 23 of User zuul.
Nov 28 12:40:57 np0005539065 python3.9[106917]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:40:58 np0005539065 python3.9[107073]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:41:00 np0005539065 python3.9[107238]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 28 12:41:00 np0005539065 systemd[1]: Reloading.
Nov 28 12:41:00 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:41:00 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:41:01 np0005539065 python3.9[107422]: ansible-ansible.builtin.service_facts Invoked
Nov 28 12:41:01 np0005539065 network[107439]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 28 12:41:01 np0005539065 network[107440]: 'network-scripts' will be removed from distribution in near future.
Nov 28 12:41:01 np0005539065 network[107441]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 28 12:41:05 np0005539065 python3.9[107702]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:41:05 np0005539065 python3.9[107855]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:41:06 np0005539065 python3.9[108008]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:41:07 np0005539065 python3.9[108161]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:41:08 np0005539065 python3.9[108314]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:41:08 np0005539065 python3.9[108467]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:41:09 np0005539065 python3.9[108620]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:41:10 np0005539065 python3.9[108773]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:41:11 np0005539065 python3.9[108925]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:41:11 np0005539065 python3.9[109077]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:41:12 np0005539065 python3.9[109229]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:41:13 np0005539065 python3.9[109381]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:41:13 np0005539065 python3.9[109533]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:41:14 np0005539065 python3.9[109685]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:41:15 np0005539065 python3.9[109837]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:41:15 np0005539065 python3.9[109989]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:41:16 np0005539065 python3.9[110141]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:41:17 np0005539065 python3.9[110293]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:41:17 np0005539065 python3.9[110445]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:41:18 np0005539065 python3.9[110597]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:41:18 np0005539065 python3.9[110749]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:41:19 np0005539065 python3.9[110901]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:41:20 np0005539065 python3.9[111053]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 28 12:41:20 np0005539065 podman[111177]: 2025-11-28 17:41:20.901940633 +0000 UTC m=+0.087289416 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 28 12:41:21 np0005539065 python3.9[111224]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 28 12:41:21 np0005539065 systemd[1]: Reloading.
Nov 28 12:41:21 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:41:21 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:41:22 np0005539065 python3.9[111411]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:41:22 np0005539065 python3.9[111564]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:41:23 np0005539065 python3.9[111717]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:41:24 np0005539065 python3.9[111870]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:41:24 np0005539065 python3.9[112023]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:41:25 np0005539065 podman[112148]: 2025-11-28 17:41:25.250370136 +0000 UTC m=+0.102671371 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 28 12:41:25 np0005539065 python3.9[112193]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:41:26 np0005539065 python3.9[112353]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:41:27 np0005539065 python3.9[112506]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 28 12:41:27 np0005539065 python3.9[112659]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 28 12:41:28 np0005539065 python3.9[112817]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 28 12:41:29 np0005539065 python3.9[112977]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 28 12:41:30 np0005539065 python3.9[113061]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 28 12:41:51 np0005539065 podman[113253]: 2025-11-28 17:41:51.03217295 +0000 UTC m=+0.088189991 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 28 12:41:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:41:52.579 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 12:41:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:41:52.580 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 12:41:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:41:52.580 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 12:41:56 np0005539065 podman[113273]: 2025-11-28 17:41:56.041911909 +0000 UTC m=+0.104720752 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 28 12:41:59 np0005539065 kernel: SELinux:  Converting 2758 SID table entries...
Nov 28 12:41:59 np0005539065 kernel: SELinux:  policy capability network_peer_controls=1
Nov 28 12:41:59 np0005539065 kernel: SELinux:  policy capability open_perms=1
Nov 28 12:41:59 np0005539065 kernel: SELinux:  policy capability extended_socket_class=1
Nov 28 12:41:59 np0005539065 kernel: SELinux:  policy capability always_check_network=0
Nov 28 12:41:59 np0005539065 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 28 12:41:59 np0005539065 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 28 12:41:59 np0005539065 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 28 12:42:11 np0005539065 kernel: SELinux:  Converting 2758 SID table entries...
Nov 28 12:42:11 np0005539065 kernel: SELinux:  policy capability network_peer_controls=1
Nov 28 12:42:11 np0005539065 kernel: SELinux:  policy capability open_perms=1
Nov 28 12:42:11 np0005539065 kernel: SELinux:  policy capability extended_socket_class=1
Nov 28 12:42:11 np0005539065 kernel: SELinux:  policy capability always_check_network=0
Nov 28 12:42:11 np0005539065 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 28 12:42:11 np0005539065 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 28 12:42:11 np0005539065 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 28 12:42:21 np0005539065 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 28 12:42:22 np0005539065 podman[113315]: 2025-11-28 17:42:22.018316539 +0000 UTC m=+0.061047508 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 28 12:42:27 np0005539065 podman[115336]: 2025-11-28 17:42:27.020901662 +0000 UTC m=+0.081039798 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 28 12:42:52 np0005539065 podman[130151]: 2025-11-28 17:42:52.419003995 +0000 UTC m=+0.048697640 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 28 12:42:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:42:52.581 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 12:42:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:42:52.581 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 12:42:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:42:52.582 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 12:42:58 np0005539065 podman[130196]: 2025-11-28 17:42:58.041911585 +0000 UTC m=+0.101156412 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 28 12:43:06 np0005539065 kernel: SELinux:  Converting 2759 SID table entries...
Nov 28 12:43:06 np0005539065 kernel: SELinux:  policy capability network_peer_controls=1
Nov 28 12:43:06 np0005539065 kernel: SELinux:  policy capability open_perms=1
Nov 28 12:43:06 np0005539065 kernel: SELinux:  policy capability extended_socket_class=1
Nov 28 12:43:06 np0005539065 kernel: SELinux:  policy capability always_check_network=0
Nov 28 12:43:06 np0005539065 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 28 12:43:06 np0005539065 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 28 12:43:06 np0005539065 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 28 12:43:07 np0005539065 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Nov 28 12:43:07 np0005539065 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 28 12:43:07 np0005539065 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Nov 28 12:43:15 np0005539065 systemd[1]: Stopping OpenSSH server daemon...
Nov 28 12:43:15 np0005539065 systemd[1]: sshd.service: Deactivated successfully.
Nov 28 12:43:15 np0005539065 systemd[1]: Stopped OpenSSH server daemon.
Nov 28 12:43:15 np0005539065 systemd[1]: sshd.service: Consumed 1.191s CPU time, read 32.0K from disk, written 4.0K to disk.
Nov 28 12:43:15 np0005539065 systemd[1]: Stopped target sshd-keygen.target.
Nov 28 12:43:15 np0005539065 systemd[1]: Stopping sshd-keygen.target...
Nov 28 12:43:15 np0005539065 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 28 12:43:15 np0005539065 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 28 12:43:15 np0005539065 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 28 12:43:15 np0005539065 systemd[1]: Reached target sshd-keygen.target.
Nov 28 12:43:15 np0005539065 systemd[1]: Starting OpenSSH server daemon...
Nov 28 12:43:15 np0005539065 systemd[1]: Started OpenSSH server daemon.
Nov 28 12:43:17 np0005539065 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 28 12:43:17 np0005539065 systemd[1]: Starting man-db-cache-update.service...
Nov 28 12:43:17 np0005539065 systemd[1]: Reloading.
Nov 28 12:43:17 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:43:17 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:43:17 np0005539065 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 28 12:43:21 np0005539065 python3.9[134909]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 28 12:43:22 np0005539065 systemd[1]: Reloading.
Nov 28 12:43:22 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:43:22 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:43:22 np0005539065 podman[136113]: 2025-11-28 17:43:22.717583429 +0000 UTC m=+0.097606881 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 28 12:43:23 np0005539065 python3.9[136965]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 28 12:43:23 np0005539065 systemd[1]: Reloading.
Nov 28 12:43:23 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:43:23 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:43:25 np0005539065 python3.9[138260]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 28 12:43:25 np0005539065 systemd[1]: Reloading.
Nov 28 12:43:25 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:43:25 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:43:26 np0005539065 python3.9[139636]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 28 12:43:26 np0005539065 systemd[1]: Reloading.
Nov 28 12:43:26 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:43:26 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:43:26 np0005539065 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 28 12:43:26 np0005539065 systemd[1]: Finished man-db-cache-update.service.
Nov 28 12:43:26 np0005539065 systemd[1]: man-db-cache-update.service: Consumed 11.816s CPU time.
Nov 28 12:43:26 np0005539065 systemd[1]: run-r4d5d286d77bf4dd9ac5cfe9a619e0514.service: Deactivated successfully.
Nov 28 12:43:27 np0005539065 python3.9[140550]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 28 12:43:27 np0005539065 systemd[1]: Reloading.
Nov 28 12:43:27 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:43:27 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:43:28 np0005539065 podman[140712]: 2025-11-28 17:43:28.5180007 +0000 UTC m=+0.100901160 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 28 12:43:28 np0005539065 python3.9[140755]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 28 12:43:28 np0005539065 systemd[1]: Reloading.
Nov 28 12:43:28 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:43:28 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:43:29 np0005539065 python3.9[140957]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 28 12:43:29 np0005539065 systemd[1]: Reloading.
Nov 28 12:43:29 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:43:29 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:43:30 np0005539065 python3.9[141147]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 28 12:43:31 np0005539065 python3.9[141302]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 28 12:43:31 np0005539065 systemd[1]: Reloading.
Nov 28 12:43:31 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:43:31 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:43:32 np0005539065 python3.9[141492]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 28 12:43:32 np0005539065 systemd[1]: Reloading.
Nov 28 12:43:32 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:43:32 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:43:33 np0005539065 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 28 12:43:33 np0005539065 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 28 12:43:33 np0005539065 python3.9[141685]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 28 12:43:34 np0005539065 python3.9[141840]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 28 12:43:35 np0005539065 python3.9[141995]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 28 12:43:36 np0005539065 python3.9[142150]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 28 12:43:37 np0005539065 python3.9[142305]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 28 12:43:37 np0005539065 python3.9[142460]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 28 12:43:38 np0005539065 python3.9[142615]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 28 12:43:39 np0005539065 python3.9[142770]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 28 12:43:40 np0005539065 python3.9[142925]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 28 12:43:41 np0005539065 python3.9[143080]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 28 12:43:41 np0005539065 python3.9[143235]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 28 12:43:42 np0005539065 python3.9[143390]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 28 12:43:43 np0005539065 python3.9[143545]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 28 12:43:44 np0005539065 python3.9[143700]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 28 12:43:45 np0005539065 python3.9[143855]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:43:45 np0005539065 python3.9[144007]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:43:46 np0005539065 python3.9[144159]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:43:47 np0005539065 python3.9[144311]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:43:47 np0005539065 python3.9[144463]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:43:48 np0005539065 python3.9[144615]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:43:49 np0005539065 python3.9[144767]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:43:50 np0005539065 python3.9[144892]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764351828.5038476-554-90758455929093/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:43:50 np0005539065 python3.9[145044]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:43:51 np0005539065 python3.9[145169]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764351830.232498-554-19600799914266/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:43:51 np0005539065 python3.9[145321]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:43:52 np0005539065 python3.9[145446]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764351831.3407834-554-123195923738163/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:43:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:43:52.582 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 12:43:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:43:52.583 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 12:43:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:43:52.583 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 12:43:52 np0005539065 podman[145596]: 2025-11-28 17:43:52.985613116 +0000 UTC m=+0.050942838 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 28 12:43:53 np0005539065 python3.9[145599]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:43:53 np0005539065 python3.9[145742]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764351832.566322-554-263493301109826/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:43:54 np0005539065 python3.9[145894]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:43:55 np0005539065 python3.9[146019]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764351833.8947504-554-103623161412467/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:43:55 np0005539065 python3.9[146171]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:43:56 np0005539065 python3.9[146296]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764351835.166322-554-259958214968487/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:43:56 np0005539065 python3.9[146448]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:43:57 np0005539065 python3.9[146571]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764351836.267361-554-56020478518078/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:43:58 np0005539065 python3.9[146723]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:43:58 np0005539065 python3.9[146848]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764351837.6643887-554-212900935272669/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:43:58 np0005539065 podman[146849]: 2025-11-28 17:43:58.757663804 +0000 UTC m=+0.076348909 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 28 12:43:59 np0005539065 python3.9[147026]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 28 12:43:59 np0005539065 python3.9[147179]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:00 np0005539065 python3.9[147331]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:01 np0005539065 python3.9[147483]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:01 np0005539065 python3.9[147635]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:02 np0005539065 python3.9[147787]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:03 np0005539065 python3.9[147939]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:03 np0005539065 python3.9[148091]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:04 np0005539065 python3.9[148243]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:05 np0005539065 python3.9[148395]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:05 np0005539065 python3.9[148547]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:06 np0005539065 python3.9[148699]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:06 np0005539065 python3.9[148851]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:07 np0005539065 python3.9[149003]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:08 np0005539065 python3.9[149155]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:08 np0005539065 python3.9[149307]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:44:09 np0005539065 python3.9[149430]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351848.1823401-775-41812104277289/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:09 np0005539065 python3.9[149582]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:44:10 np0005539065 python3.9[149705]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351849.3135037-775-275729445388028/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:11 np0005539065 python3.9[149857]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:44:11 np0005539065 python3.9[149980]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351850.5916727-775-154188527398251/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:12 np0005539065 python3.9[150132]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:44:12 np0005539065 python3.9[150255]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351851.6501439-775-168364902411676/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:13 np0005539065 python3.9[150407]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:44:13 np0005539065 python3.9[150530]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351852.7395036-775-119476736365934/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:14 np0005539065 python3.9[150682]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:44:14 np0005539065 python3.9[150805]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351853.883178-775-4262377642413/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:15 np0005539065 python3.9[150957]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:44:16 np0005539065 python3.9[151080]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351855.2069292-775-266730451270917/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:16 np0005539065 python3.9[151232]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:44:17 np0005539065 python3.9[151355]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351856.3990324-775-109346631457485/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:17 np0005539065 python3.9[151507]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:44:18 np0005539065 python3.9[151630]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351857.468316-775-276622074493413/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:19 np0005539065 python3.9[151782]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:44:19 np0005539065 python3.9[151905]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351858.6168458-775-101650722758557/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:20 np0005539065 python3.9[152057]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:44:20 np0005539065 python3.9[152180]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351859.898838-775-227032583253244/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:21 np0005539065 python3.9[152332]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:44:22 np0005539065 python3.9[152455]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351861.0608706-775-279295597554206/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:22 np0005539065 python3.9[152607]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:44:23 np0005539065 python3.9[152730]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351862.2287354-775-166883754111853/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:23 np0005539065 podman[152854]: 2025-11-28 17:44:23.776184936 +0000 UTC m=+0.076317152 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 28 12:44:23 np0005539065 python3.9[152901]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:44:24 np0005539065 python3.9[153024]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351863.478626-775-189111764039098/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:25 np0005539065 python3.9[153174]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:44:25 np0005539065 python3.9[153329]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 28 12:44:27 np0005539065 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 28 12:44:27 np0005539065 python3.9[153485]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:28 np0005539065 python3.9[153637]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:29 np0005539065 podman[153737]: 2025-11-28 17:44:29.020026365 +0000 UTC m=+0.079746677 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 28 12:44:29 np0005539065 python3.9[153815]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:29 np0005539065 python3.9[153967]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:30 np0005539065 python3.9[154119]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:31 np0005539065 python3.9[154271]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:31 np0005539065 python3.9[154423]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:32 np0005539065 python3.9[154575]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:33 np0005539065 python3.9[154727]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:33 np0005539065 python3.9[154879]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:34 np0005539065 python3.9[155031]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 12:44:34 np0005539065 systemd[1]: Reloading.
Nov 28 12:44:34 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:44:34 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:44:34 np0005539065 systemd[1]: Starting libvirt logging daemon socket...
Nov 28 12:44:34 np0005539065 systemd[1]: Listening on libvirt logging daemon socket.
Nov 28 12:44:34 np0005539065 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 28 12:44:34 np0005539065 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 28 12:44:34 np0005539065 systemd[1]: Starting libvirt logging daemon...
Nov 28 12:44:34 np0005539065 systemd[1]: Started libvirt logging daemon.
Nov 28 12:44:35 np0005539065 python3.9[155225]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 12:44:35 np0005539065 systemd[1]: Reloading.
Nov 28 12:44:35 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:44:35 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:44:35 np0005539065 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 28 12:44:35 np0005539065 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 28 12:44:35 np0005539065 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 28 12:44:35 np0005539065 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 28 12:44:35 np0005539065 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 28 12:44:35 np0005539065 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 28 12:44:35 np0005539065 systemd[1]: Starting libvirt nodedev daemon...
Nov 28 12:44:35 np0005539065 systemd[1]: Started libvirt nodedev daemon.
Nov 28 12:44:36 np0005539065 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 28 12:44:36 np0005539065 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 28 12:44:36 np0005539065 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 28 12:44:36 np0005539065 python3.9[155442]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 12:44:36 np0005539065 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 28 12:44:36 np0005539065 systemd[1]: Reloading.
Nov 28 12:44:36 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:44:36 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:44:36 np0005539065 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 28 12:44:36 np0005539065 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 28 12:44:36 np0005539065 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 28 12:44:36 np0005539065 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 28 12:44:36 np0005539065 systemd[1]: Starting libvirt proxy daemon...
Nov 28 12:44:36 np0005539065 systemd[1]: Started libvirt proxy daemon.
Nov 28 12:44:37 np0005539065 setroubleshoot[155366]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l b1b8a78b-5a04-4ef9-8fc5-b6d359d7634c
Nov 28 12:44:37 np0005539065 setroubleshoot[155366]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 28 12:44:37 np0005539065 setroubleshoot[155366]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l b1b8a78b-5a04-4ef9-8fc5-b6d359d7634c
Nov 28 12:44:37 np0005539065 setroubleshoot[155366]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 28 12:44:37 np0005539065 python3.9[155662]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 12:44:37 np0005539065 systemd[1]: Reloading.
Nov 28 12:44:37 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:44:37 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:44:37 np0005539065 systemd[1]: Listening on libvirt locking daemon socket.
Nov 28 12:44:37 np0005539065 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 28 12:44:37 np0005539065 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 28 12:44:37 np0005539065 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 28 12:44:37 np0005539065 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 28 12:44:37 np0005539065 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 28 12:44:37 np0005539065 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 28 12:44:37 np0005539065 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 28 12:44:37 np0005539065 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 28 12:44:37 np0005539065 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 28 12:44:37 np0005539065 systemd[1]: Starting libvirt QEMU daemon...
Nov 28 12:44:37 np0005539065 systemd[1]: Started libvirt QEMU daemon.
Nov 28 12:44:38 np0005539065 python3.9[155878]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 12:44:38 np0005539065 systemd[1]: Reloading.
Nov 28 12:44:38 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:44:38 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:44:38 np0005539065 systemd[1]: Starting libvirt secret daemon socket...
Nov 28 12:44:38 np0005539065 systemd[1]: Listening on libvirt secret daemon socket.
Nov 28 12:44:38 np0005539065 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 28 12:44:38 np0005539065 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 28 12:44:38 np0005539065 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 28 12:44:38 np0005539065 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 28 12:44:38 np0005539065 systemd[1]: Starting libvirt secret daemon...
Nov 28 12:44:38 np0005539065 systemd[1]: Started libvirt secret daemon.
Nov 28 12:44:39 np0005539065 python3.9[156090]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:40 np0005539065 python3.9[156242]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 28 12:44:41 np0005539065 python3.9[156394]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:44:41 np0005539065 python3.9[156517]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764351880.838181-1120-161156171629966/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:42 np0005539065 python3.9[156669]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:43 np0005539065 python3.9[156821]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:44:43 np0005539065 python3.9[156899]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:44 np0005539065 python3.9[157051]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:44:44 np0005539065 python3.9[157129]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.w0im6lpd recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:45 np0005539065 python3.9[157281]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:44:46 np0005539065 python3.9[157359]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:46 np0005539065 python3.9[157511]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:44:47 np0005539065 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 28 12:44:47 np0005539065 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.013s CPU time.
Nov 28 12:44:47 np0005539065 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 28 12:44:47 np0005539065 python3[157664]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 28 12:44:48 np0005539065 python3.9[157816]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:44:48 np0005539065 python3.9[157894]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:49 np0005539065 python3.9[158046]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:44:49 np0005539065 python3.9[158124]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:50 np0005539065 python3.9[158276]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:44:51 np0005539065 python3.9[158354]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:51 np0005539065 python3.9[158506]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:44:52 np0005539065 python3.9[158584]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:44:52.582 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 12:44:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:44:52.584 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 12:44:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:44:52.584 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 12:44:52 np0005539065 python3.9[158736]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:44:53 np0005539065 python3.9[158861]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764351892.4233003-1245-59843416780538/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:54 np0005539065 podman[158978]: 2025-11-28 17:44:54.012057493 +0000 UTC m=+0.065982004 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 28 12:44:54 np0005539065 python3.9[159032]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:54 np0005539065 python3.9[159185]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:44:55 np0005539065 python3.9[159340]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:56 np0005539065 python3.9[159492]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:44:57 np0005539065 python3.9[159645]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:44:57 np0005539065 python3.9[159799]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:44:58 np0005539065 python3.9[159954]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:44:59 np0005539065 python3.9[160106]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:44:59 np0005539065 podman[160201]: 2025-11-28 17:44:59.727179252 +0000 UTC m=+0.087600953 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 28 12:44:59 np0005539065 python3.9[160248]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764351898.7163672-1317-81159808092284/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:45:00 np0005539065 python3.9[160407]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:45:01 np0005539065 python3.9[160530]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764351900.0680938-1332-16310307142347/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:45:01 np0005539065 python3.9[160682]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:45:02 np0005539065 python3.9[160805]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764351901.3595233-1347-49406754536852/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:45:03 np0005539065 python3.9[160957]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:45:03 np0005539065 systemd[1]: Reloading.
Nov 28 12:45:03 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:45:03 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:45:03 np0005539065 systemd[1]: Reached target edpm_libvirt.target.
Nov 28 12:45:04 np0005539065 python3.9[161148]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 28 12:45:04 np0005539065 systemd[1]: Reloading.
Nov 28 12:45:04 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:45:04 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:45:04 np0005539065 systemd[1]: Reloading.
Nov 28 12:45:04 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:45:04 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:45:05 np0005539065 systemd[1]: session-23.scope: Deactivated successfully.
Nov 28 12:45:05 np0005539065 systemd[1]: session-23.scope: Consumed 3min 28.119s CPU time.
Nov 28 12:45:05 np0005539065 systemd-logind[790]: Session 23 logged out. Waiting for processes to exit.
Nov 28 12:45:05 np0005539065 systemd-logind[790]: Removed session 23.
Nov 28 12:45:10 np0005539065 systemd-logind[790]: New session 24 of user zuul.
Nov 28 12:45:10 np0005539065 systemd[1]: Started Session 24 of User zuul.
Nov 28 12:45:12 np0005539065 python3.9[161398]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:45:13 np0005539065 python3.9[161552]: ansible-ansible.builtin.service_facts Invoked
Nov 28 12:45:13 np0005539065 network[161569]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 28 12:45:13 np0005539065 network[161570]: 'network-scripts' will be removed from distribution in near future.
Nov 28 12:45:13 np0005539065 network[161571]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 28 12:45:17 np0005539065 python3.9[161842]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 28 12:45:18 np0005539065 python3.9[161926]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 28 12:45:24 np0005539065 podman[162051]: 2025-11-28 17:45:24.219912494 +0000 UTC m=+0.061336248 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 28 12:45:24 np0005539065 python3.9[162096]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:45:25 np0005539065 python3.9[162248]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:45:25 np0005539065 python3.9[162401]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:45:26 np0005539065 python3.9[162553]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:45:27 np0005539065 python3.9[162706]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:45:27 np0005539065 python3.9[162829]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764351926.7037764-95-160624746125848/.source.iscsi _original_basename=.xsaxhltb follow=False checksum=07bd4c812193ec9524bdae7b13bf86ea5a1541f2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:45:28 np0005539065 python3.9[162981]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:45:30 np0005539065 podman[163134]: 2025-11-28 17:45:30.507337891 +0000 UTC m=+0.080308280 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 28 12:45:30 np0005539065 python3.9[163133]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:45:30 np0005539065 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 28 12:45:30 np0005539065 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 28 12:45:31 np0005539065 python3.9[163313]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:45:31 np0005539065 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 28 12:45:32 np0005539065 python3.9[163469]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:45:32 np0005539065 systemd[1]: Reloading.
Nov 28 12:45:32 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:45:32 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:45:32 np0005539065 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 28 12:45:32 np0005539065 systemd[1]: Starting Open-iSCSI...
Nov 28 12:45:32 np0005539065 kernel: Loading iSCSI transport class v2.0-870.
Nov 28 12:45:32 np0005539065 systemd[1]: Started Open-iSCSI.
Nov 28 12:45:33 np0005539065 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 28 12:45:33 np0005539065 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 28 12:45:33 np0005539065 python3.9[163669]: ansible-ansible.builtin.service_facts Invoked
Nov 28 12:45:33 np0005539065 network[163686]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 28 12:45:33 np0005539065 network[163687]: 'network-scripts' will be removed from distribution in near future.
Nov 28 12:45:33 np0005539065 network[163688]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 28 12:45:37 np0005539065 python3.9[163959]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 28 12:45:38 np0005539065 python3.9[164111]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 28 12:45:38 np0005539065 python3.9[164267]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:45:39 np0005539065 python3.9[164390]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764351938.2894044-172-251935363937445/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:45:39 np0005539065 python3.9[164542]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:45:40 np0005539065 python3.9[164694]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 12:45:40 np0005539065 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 28 12:45:40 np0005539065 systemd[1]: Stopped Load Kernel Modules.
Nov 28 12:45:40 np0005539065 systemd[1]: Stopping Load Kernel Modules...
Nov 28 12:45:41 np0005539065 systemd[1]: Starting Load Kernel Modules...
Nov 28 12:45:41 np0005539065 systemd[1]: Finished Load Kernel Modules.
Nov 28 12:45:41 np0005539065 python3.9[164850]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:45:42 np0005539065 python3.9[165002]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:45:42 np0005539065 python3.9[165154]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:45:43 np0005539065 python3.9[165306]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:45:44 np0005539065 python3.9[165429]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764351943.0985107-230-57748134136127/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:45:44 np0005539065 python3.9[165581]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:45:45 np0005539065 python3.9[165734]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:45:46 np0005539065 python3.9[165886]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:45:46 np0005539065 python3.9[166038]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:45:47 np0005539065 python3.9[166190]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:45:48 np0005539065 python3.9[166342]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:45:48 np0005539065 python3.9[166494]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:45:49 np0005539065 python3.9[166646]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:45:49 np0005539065 python3.9[166798]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:45:50 np0005539065 python3.9[166952]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:45:51 np0005539065 python3.9[167104]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:45:51 np0005539065 python3.9[167256]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:45:52 np0005539065 python3.9[167334]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:45:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:45:52.583 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 12:45:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:45:52.585 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 12:45:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:45:52.585 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 12:45:52 np0005539065 python3.9[167486]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:45:53 np0005539065 python3.9[167564]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:45:53 np0005539065 python3.9[167716]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:45:54 np0005539065 podman[167840]: 2025-11-28 17:45:54.395385743 +0000 UTC m=+0.062682410 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Nov 28 12:45:54 np0005539065 python3.9[167881]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:45:55 np0005539065 python3.9[167965]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:45:55 np0005539065 python3.9[168117]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:45:56 np0005539065 python3.9[168195]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:45:56 np0005539065 python3.9[168347]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:45:56 np0005539065 systemd[1]: Reloading.
Nov 28 12:45:56 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:45:56 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:45:57 np0005539065 python3.9[168536]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:45:58 np0005539065 python3.9[168614]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:45:58 np0005539065 python3.9[168766]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:45:59 np0005539065 python3.9[168844]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:46:00 np0005539065 python3.9[168996]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:46:00 np0005539065 systemd[1]: Reloading.
Nov 28 12:46:00 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:46:00 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:46:00 np0005539065 systemd[1]: Starting Create netns directory...
Nov 28 12:46:00 np0005539065 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 28 12:46:00 np0005539065 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 28 12:46:00 np0005539065 systemd[1]: Finished Create netns directory.
Nov 28 12:46:01 np0005539065 podman[169138]: 2025-11-28 17:46:01.02862746 +0000 UTC m=+0.087055038 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 28 12:46:01 np0005539065 python3.9[169216]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:46:01 np0005539065 python3.9[169368]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:46:02 np0005539065 python3.9[169491]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764351961.413996-437-190788056035785/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:46:03 np0005539065 python3.9[169643]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:46:04 np0005539065 python3.9[169795]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:46:04 np0005539065 python3.9[169918]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764351963.5318801-462-94785477436761/.source.json _original_basename=.ejfex_0l follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:46:05 np0005539065 python3.9[170070]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:46:07 np0005539065 python3.9[170497]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 28 12:46:09 np0005539065 python3.9[170649]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 28 12:46:10 np0005539065 python3.9[170801]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 28 12:46:11 np0005539065 python3[170980]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 28 12:46:11 np0005539065 podman[171014]: 2025-11-28 17:46:11.845238895 +0000 UTC m=+0.045983735 container create bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 28 12:46:11 np0005539065 podman[171014]: 2025-11-28 17:46:11.820684655 +0000 UTC m=+0.021429515 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 28 12:46:11 np0005539065 python3[170980]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 28 12:46:12 np0005539065 python3.9[171204]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:46:13 np0005539065 python3.9[171358]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:46:13 np0005539065 python3.9[171434]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:46:14 np0005539065 python3.9[171585]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764351973.7995267-550-17282205176239/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:46:15 np0005539065 python3.9[171661]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 28 12:46:15 np0005539065 systemd[1]: Reloading.
Nov 28 12:46:15 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:46:15 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:46:16 np0005539065 python3.9[171772]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:46:16 np0005539065 systemd[1]: Reloading.
Nov 28 12:46:16 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:46:16 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:46:16 np0005539065 systemd[1]: Starting multipathd container...
Nov 28 12:46:16 np0005539065 systemd[1]: Started libcrun container.
Nov 28 12:46:16 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81fdfc72672e512a88a8c1dd57ea7f3eb794316f082a513f96d1198fd4abd296/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 28 12:46:16 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81fdfc72672e512a88a8c1dd57ea7f3eb794316f082a513f96d1198fd4abd296/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 28 12:46:16 np0005539065 systemd[1]: Started /usr/bin/podman healthcheck run bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc.
Nov 28 12:46:16 np0005539065 podman[171812]: 2025-11-28 17:46:16.917203232 +0000 UTC m=+0.165681236 container init bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd)
Nov 28 12:46:16 np0005539065 multipathd[171827]: + sudo -E kolla_set_configs
Nov 28 12:46:16 np0005539065 podman[171812]: 2025-11-28 17:46:16.947498983 +0000 UTC m=+0.195976967 container start bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 12:46:16 np0005539065 podman[171812]: multipathd
Nov 28 12:46:16 np0005539065 systemd[1]: Started multipathd container.
Nov 28 12:46:17 np0005539065 multipathd[171827]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 28 12:46:17 np0005539065 multipathd[171827]: INFO:__main__:Validating config file
Nov 28 12:46:17 np0005539065 multipathd[171827]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 28 12:46:17 np0005539065 multipathd[171827]: INFO:__main__:Writing out command to execute
Nov 28 12:46:17 np0005539065 multipathd[171827]: ++ cat /run_command
Nov 28 12:46:17 np0005539065 multipathd[171827]: + CMD='/usr/sbin/multipathd -d'
Nov 28 12:46:17 np0005539065 multipathd[171827]: + ARGS=
Nov 28 12:46:17 np0005539065 multipathd[171827]: + sudo kolla_copy_cacerts
Nov 28 12:46:17 np0005539065 podman[171834]: 2025-11-28 17:46:17.029879459 +0000 UTC m=+0.068190914 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 12:46:17 np0005539065 multipathd[171827]: + [[ ! -n '' ]]
Nov 28 12:46:17 np0005539065 multipathd[171827]: + . kolla_extend_start
Nov 28 12:46:17 np0005539065 multipathd[171827]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 28 12:46:17 np0005539065 multipathd[171827]: Running command: '/usr/sbin/multipathd -d'
Nov 28 12:46:17 np0005539065 multipathd[171827]: + umask 0022
Nov 28 12:46:17 np0005539065 multipathd[171827]: + exec /usr/sbin/multipathd -d
Nov 28 12:46:17 np0005539065 systemd[1]: bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc-69af67606bd1d32b.service: Main process exited, code=exited, status=1/FAILURE
Nov 28 12:46:17 np0005539065 systemd[1]: bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc-69af67606bd1d32b.service: Failed with result 'exit-code'.
Nov 28 12:46:17 np0005539065 multipathd[171827]: 3122.711224 | --------start up--------
Nov 28 12:46:17 np0005539065 multipathd[171827]: 3122.711248 | read /etc/multipath.conf
Nov 28 12:46:17 np0005539065 multipathd[171827]: 3122.717660 | path checkers start up
Nov 28 12:46:17 np0005539065 python3.9[172017]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:46:18 np0005539065 python3.9[172171]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:46:19 np0005539065 python3.9[172337]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 12:46:19 np0005539065 systemd[1]: Stopping multipathd container...
Nov 28 12:46:19 np0005539065 multipathd[171827]: 3125.131118 | exit (signal)
Nov 28 12:46:19 np0005539065 multipathd[171827]: 3125.131233 | --------shut down-------
Nov 28 12:46:19 np0005539065 systemd[1]: libpod-bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc.scope: Deactivated successfully.
Nov 28 12:46:19 np0005539065 podman[172341]: 2025-11-28 17:46:19.510909451 +0000 UTC m=+0.078732226 container died bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 28 12:46:19 np0005539065 systemd[1]: bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc-69af67606bd1d32b.timer: Deactivated successfully.
Nov 28 12:46:19 np0005539065 systemd[1]: Stopped /usr/bin/podman healthcheck run bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc.
Nov 28 12:46:19 np0005539065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc-userdata-shm.mount: Deactivated successfully.
Nov 28 12:46:19 np0005539065 systemd[1]: var-lib-containers-storage-overlay-81fdfc72672e512a88a8c1dd57ea7f3eb794316f082a513f96d1198fd4abd296-merged.mount: Deactivated successfully.
Nov 28 12:46:19 np0005539065 podman[172341]: 2025-11-28 17:46:19.566419229 +0000 UTC m=+0.134242004 container cleanup bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 28 12:46:19 np0005539065 podman[172341]: multipathd
Nov 28 12:46:19 np0005539065 podman[172370]: multipathd
Nov 28 12:46:19 np0005539065 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 28 12:46:19 np0005539065 systemd[1]: Stopped multipathd container.
Nov 28 12:46:19 np0005539065 systemd[1]: Starting multipathd container...
Nov 28 12:46:19 np0005539065 systemd[1]: Started libcrun container.
Nov 28 12:46:19 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81fdfc72672e512a88a8c1dd57ea7f3eb794316f082a513f96d1198fd4abd296/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 28 12:46:19 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81fdfc72672e512a88a8c1dd57ea7f3eb794316f082a513f96d1198fd4abd296/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 28 12:46:19 np0005539065 systemd[1]: Started /usr/bin/podman healthcheck run bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc.
Nov 28 12:46:19 np0005539065 podman[172383]: 2025-11-28 17:46:19.787045357 +0000 UTC m=+0.108075704 container init bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 28 12:46:19 np0005539065 multipathd[172399]: + sudo -E kolla_set_configs
Nov 28 12:46:19 np0005539065 podman[172383]: 2025-11-28 17:46:19.81575427 +0000 UTC m=+0.136784617 container start bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 28 12:46:19 np0005539065 podman[172383]: multipathd
Nov 28 12:46:19 np0005539065 systemd[1]: Started multipathd container.
Nov 28 12:46:19 np0005539065 multipathd[172399]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 28 12:46:19 np0005539065 multipathd[172399]: INFO:__main__:Validating config file
Nov 28 12:46:19 np0005539065 multipathd[172399]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 28 12:46:19 np0005539065 multipathd[172399]: INFO:__main__:Writing out command to execute
Nov 28 12:46:19 np0005539065 multipathd[172399]: ++ cat /run_command
Nov 28 12:46:19 np0005539065 multipathd[172399]: + CMD='/usr/sbin/multipathd -d'
Nov 28 12:46:19 np0005539065 multipathd[172399]: + ARGS=
Nov 28 12:46:19 np0005539065 multipathd[172399]: + sudo kolla_copy_cacerts
Nov 28 12:46:19 np0005539065 podman[172406]: 2025-11-28 17:46:19.887336457 +0000 UTC m=+0.059757645 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd)
Nov 28 12:46:19 np0005539065 systemd[1]: bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc-536857017f1f003a.service: Main process exited, code=exited, status=1/FAILURE
Nov 28 12:46:19 np0005539065 systemd[1]: bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc-536857017f1f003a.service: Failed with result 'exit-code'.
Nov 28 12:46:19 np0005539065 multipathd[172399]: + [[ ! -n '' ]]
Nov 28 12:46:19 np0005539065 multipathd[172399]: + . kolla_extend_start
Nov 28 12:46:19 np0005539065 multipathd[172399]: Running command: '/usr/sbin/multipathd -d'
Nov 28 12:46:19 np0005539065 multipathd[172399]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 28 12:46:19 np0005539065 multipathd[172399]: + umask 0022
Nov 28 12:46:19 np0005539065 multipathd[172399]: + exec /usr/sbin/multipathd -d
Nov 28 12:46:19 np0005539065 multipathd[172399]: 3125.575745 | --------start up--------
Nov 28 12:46:19 np0005539065 multipathd[172399]: 3125.575774 | read /etc/multipath.conf
Nov 28 12:46:19 np0005539065 multipathd[172399]: 3125.583024 | path checkers start up
Nov 28 12:46:20 np0005539065 python3.9[172590]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:46:21 np0005539065 python3.9[172742]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 28 12:46:22 np0005539065 python3.9[172894]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 28 12:46:22 np0005539065 kernel: Key type psk registered
Nov 28 12:46:22 np0005539065 python3.9[173059]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:46:23 np0005539065 python3.9[173182]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764351982.4033782-630-219737331806895/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:46:24 np0005539065 python3.9[173334]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:46:24 np0005539065 podman[173458]: 2025-11-28 17:46:24.722429997 +0000 UTC m=+0.057997481 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent)
Nov 28 12:46:25 np0005539065 python3.9[173505]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 12:46:25 np0005539065 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 28 12:46:25 np0005539065 systemd[1]: Stopped Load Kernel Modules.
Nov 28 12:46:25 np0005539065 systemd[1]: Stopping Load Kernel Modules...
Nov 28 12:46:25 np0005539065 systemd[1]: Starting Load Kernel Modules...
Nov 28 12:46:25 np0005539065 systemd[1]: Finished Load Kernel Modules.
Nov 28 12:46:25 np0005539065 python3.9[173662]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 28 12:46:28 np0005539065 systemd[1]: Reloading.
Nov 28 12:46:28 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:46:28 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:46:28 np0005539065 systemd[1]: Reloading.
Nov 28 12:46:28 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:46:28 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:46:29 np0005539065 systemd-logind[790]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 28 12:46:29 np0005539065 systemd-logind[790]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 28 12:46:29 np0005539065 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 28 12:46:29 np0005539065 systemd[1]: Starting man-db-cache-update.service...
Nov 28 12:46:29 np0005539065 systemd[1]: Reloading.
Nov 28 12:46:29 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:46:29 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:46:29 np0005539065 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 28 12:46:30 np0005539065 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 28 12:46:30 np0005539065 systemd[1]: Finished man-db-cache-update.service.
Nov 28 12:46:30 np0005539065 systemd[1]: man-db-cache-update.service: Consumed 1.763s CPU time.
Nov 28 12:46:30 np0005539065 systemd[1]: run-r338483aea4954496a4f8d54e3e8774a0.service: Deactivated successfully.
Nov 28 12:46:31 np0005539065 python3.9[175114]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 12:46:31 np0005539065 iscsid[163509]: iscsid shutting down.
Nov 28 12:46:31 np0005539065 systemd[1]: Stopping Open-iSCSI...
Nov 28 12:46:31 np0005539065 systemd[1]: iscsid.service: Deactivated successfully.
Nov 28 12:46:31 np0005539065 systemd[1]: Stopped Open-iSCSI.
Nov 28 12:46:31 np0005539065 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 28 12:46:31 np0005539065 systemd[1]: Starting Open-iSCSI...
Nov 28 12:46:31 np0005539065 systemd[1]: Started Open-iSCSI.
Nov 28 12:46:31 np0005539065 podman[175117]: 2025-11-28 17:46:31.348401243 +0000 UTC m=+0.101190314 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 12:46:32 np0005539065 python3.9[175297]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:46:33 np0005539065 python3.9[175453]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:46:34 np0005539065 python3.9[175605]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 28 12:46:34 np0005539065 systemd[1]: Reloading.
Nov 28 12:46:34 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:46:34 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:46:35 np0005539065 python3.9[175790]: ansible-ansible.builtin.service_facts Invoked
Nov 28 12:46:35 np0005539065 network[175807]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 28 12:46:35 np0005539065 network[175808]: 'network-scripts' will be removed from distribution in near future.
Nov 28 12:46:35 np0005539065 network[175809]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 28 12:46:36 np0005539065 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 28 12:46:36 np0005539065 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 28 12:46:37 np0005539065 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 28 12:46:38 np0005539065 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 28 12:46:40 np0005539065 python3.9[176087]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:46:40 np0005539065 python3.9[176240]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:46:41 np0005539065 python3.9[176393]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:46:42 np0005539065 python3.9[176546]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:46:43 np0005539065 python3.9[176699]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:46:43 np0005539065 python3.9[176852]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:46:44 np0005539065 python3.9[177005]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:46:45 np0005539065 python3.9[177158]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:46:46 np0005539065 python3.9[177311]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:46:46 np0005539065 python3.9[177463]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:46:47 np0005539065 python3.9[177615]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:46:48 np0005539065 python3.9[177767]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:46:48 np0005539065 python3.9[177919]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:46:49 np0005539065 python3.9[178071]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:46:49 np0005539065 python3.9[178223]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:46:50 np0005539065 podman[178224]: 2025-11-28 17:46:49.999403214 +0000 UTC m=+0.059874617 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Nov 28 12:46:50 np0005539065 python3.9[178396]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:46:51 np0005539065 python3.9[178548]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:46:51 np0005539065 python3.9[178700]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:46:52 np0005539065 python3.9[178852]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:46:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:46:52.584 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 12:46:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:46:52.585 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 12:46:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:46:52.585 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 12:46:52 np0005539065 python3.9[179004]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:46:53 np0005539065 python3.9[179156]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:46:54 np0005539065 python3.9[179308]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:46:54 np0005539065 python3.9[179460]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:46:55 np0005539065 podman[179560]: 2025-11-28 17:46:55.003628396 +0000 UTC m=+0.058238638 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 12:46:55 np0005539065 python3.9[179630]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:46:55 np0005539065 python3.9[179782]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:46:56 np0005539065 python3.9[179934]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 28 12:46:57 np0005539065 python3.9[180086]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 28 12:46:57 np0005539065 systemd[1]: Reloading.
Nov 28 12:46:57 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:46:57 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:46:58 np0005539065 python3.9[180273]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:46:59 np0005539065 python3.9[180426]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:46:59 np0005539065 python3.9[180579]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:47:00 np0005539065 python3.9[180732]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:47:01 np0005539065 python3.9[180885]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:47:01 np0005539065 podman[181010]: 2025-11-28 17:47:01.853654534 +0000 UTC m=+0.093466222 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 28 12:47:01 np0005539065 python3.9[181057]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:47:02 np0005539065 python3.9[181218]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:47:03 np0005539065 python3.9[181371]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:47:04 np0005539065 python3.9[181524]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:47:05 np0005539065 python3.9[181676]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:47:06 np0005539065 python3.9[181828]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:47:06 np0005539065 python3.9[181980]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:47:07 np0005539065 python3.9[182132]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:47:08 np0005539065 python3.9[182284]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:47:09 np0005539065 python3.9[182436]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:47:09 np0005539065 python3.9[182588]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:47:10 np0005539065 python3.9[182740]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:47:10 np0005539065 python3.9[182892]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:47:15 np0005539065 python3.9[183044]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 28 12:47:16 np0005539065 python3.9[183197]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 28 12:47:17 np0005539065 python3.9[183355]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 28 12:47:18 np0005539065 systemd-logind[790]: New session 25 of user zuul.
Nov 28 12:47:18 np0005539065 systemd[1]: Started Session 25 of User zuul.
Nov 28 12:47:18 np0005539065 systemd-logind[790]: Session 25 logged out. Waiting for processes to exit.
Nov 28 12:47:18 np0005539065 systemd[1]: session-25.scope: Deactivated successfully.
Nov 28 12:47:18 np0005539065 systemd-logind[790]: Removed session 25.
Nov 28 12:47:19 np0005539065 python3.9[183541]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:47:19 np0005539065 python3.9[183662]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764352038.7824917-1229-209990293536397/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:47:20 np0005539065 podman[183786]: 2025-11-28 17:47:20.187073385 +0000 UTC m=+0.070012585 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd)
Nov 28 12:47:20 np0005539065 python3.9[183824]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:47:20 np0005539065 python3.9[183908]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:47:21 np0005539065 python3.9[184058]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:47:21 np0005539065 python3.9[184179]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764352040.927111-1229-125228921319587/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:47:22 np0005539065 python3.9[184329]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:47:23 np0005539065 python3.9[184450]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764352042.0892868-1229-113784673075883/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:47:23 np0005539065 python3.9[184600]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:47:24 np0005539065 python3.9[184721]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764352043.2351944-1229-175057694411137/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:47:24 np0005539065 python3.9[184871]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:47:25 np0005539065 podman[184966]: 2025-11-28 17:47:25.184286586 +0000 UTC m=+0.053032280 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 28 12:47:25 np0005539065 python3.9[185004]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764352044.3221369-1229-199050985471089/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:47:25 np0005539065 python3.9[185162]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:47:26 np0005539065 python3.9[185314]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:47:27 np0005539065 python3.9[185466]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:47:27 np0005539065 python3.9[185618]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:47:28 np0005539065 python3.9[185741]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764352047.4024334-1336-22494113666636/.source _original_basename=.25m1ums5 follow=False checksum=4bb7bbf4bd066d59c8eed550739f07ef458bfa23 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 28 12:47:29 np0005539065 python3.9[185893]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:47:29 np0005539065 python3.9[186045]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:47:30 np0005539065 python3.9[186166]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764352049.3143947-1362-75286364358593/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:47:30 np0005539065 python3.9[186316]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:47:31 np0005539065 python3.9[186437]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764352050.3462162-1377-158461043194213/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:47:32 np0005539065 podman[186561]: 2025-11-28 17:47:32.028942422 +0000 UTC m=+0.106958569 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 28 12:47:32 np0005539065 python3.9[186608]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 28 12:47:32 np0005539065 python3.9[186768]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 28 12:47:33 np0005539065 python3[186920]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 28 12:47:33 np0005539065 podman[186954]: 2025-11-28 17:47:33.857537077 +0000 UTC m=+0.049965774 container create 967d8e7c2c42bb06c716e4a93e5b8fe00f3b6de97c9a38b1e1bdeed06ab6ea27 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, org.label-schema.build-date=20251125)
Nov 28 12:47:33 np0005539065 podman[186954]: 2025-11-28 17:47:33.831268094 +0000 UTC m=+0.023696821 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 28 12:47:33 np0005539065 python3[186920]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 28 12:47:34 np0005539065 python3.9[187143]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:47:35 np0005539065 python3.9[187297]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 28 12:47:36 np0005539065 python3.9[187449]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 28 12:47:37 np0005539065 python3[187601]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 28 12:47:37 np0005539065 podman[187639]: 2025-11-28 17:47:37.226460388 +0000 UTC m=+0.044246214 container create 075a1f79b5b341dc66cb95f46a3b8ef1787439828f52b00ffd0a54fb4aede071 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 28 12:47:37 np0005539065 podman[187639]: 2025-11-28 17:47:37.203446154 +0000 UTC m=+0.021232010 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 28 12:47:37 np0005539065 python3[187601]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 28 12:47:37 np0005539065 python3.9[187828]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:47:38 np0005539065 python3.9[187982]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:47:39 np0005539065 python3.9[188133]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764352058.6574283-1469-161433186711829/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:47:39 np0005539065 python3.9[188209]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 28 12:47:39 np0005539065 systemd[1]: Reloading.
Nov 28 12:47:39 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:47:39 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:47:40 np0005539065 python3.9[188320]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:47:40 np0005539065 systemd[1]: Reloading.
Nov 28 12:47:40 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:47:40 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:47:40 np0005539065 systemd[1]: Starting nova_compute container...
Nov 28 12:47:40 np0005539065 systemd[1]: Started libcrun container.
Nov 28 12:47:40 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/274ad283ac75b771bd842f05ad679e7f0b7c426c277f4b186e2da4dbeca20d6e/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 28 12:47:40 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/274ad283ac75b771bd842f05ad679e7f0b7c426c277f4b186e2da4dbeca20d6e/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 28 12:47:40 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/274ad283ac75b771bd842f05ad679e7f0b7c426c277f4b186e2da4dbeca20d6e/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 28 12:47:40 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/274ad283ac75b771bd842f05ad679e7f0b7c426c277f4b186e2da4dbeca20d6e/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 28 12:47:40 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/274ad283ac75b771bd842f05ad679e7f0b7c426c277f4b186e2da4dbeca20d6e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 28 12:47:40 np0005539065 podman[188361]: 2025-11-28 17:47:40.996005849 +0000 UTC m=+0.103434624 container init 075a1f79b5b341dc66cb95f46a3b8ef1787439828f52b00ffd0a54fb4aede071 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 28 12:47:41 np0005539065 podman[188361]: 2025-11-28 17:47:41.001880753 +0000 UTC m=+0.109309518 container start 075a1f79b5b341dc66cb95f46a3b8ef1787439828f52b00ffd0a54fb4aede071 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible)
Nov 28 12:47:41 np0005539065 podman[188361]: nova_compute
Nov 28 12:47:41 np0005539065 nova_compute[188377]: + sudo -E kolla_set_configs
Nov 28 12:47:41 np0005539065 systemd[1]: Started nova_compute container.
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Validating config file
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Copying service configuration files
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Deleting /etc/ceph
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Creating directory /etc/ceph
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Setting permission for /etc/ceph
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Writing out command to execute
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 28 12:47:41 np0005539065 nova_compute[188377]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 28 12:47:41 np0005539065 nova_compute[188377]: ++ cat /run_command
Nov 28 12:47:41 np0005539065 nova_compute[188377]: + CMD=nova-compute
Nov 28 12:47:41 np0005539065 nova_compute[188377]: + ARGS=
Nov 28 12:47:41 np0005539065 nova_compute[188377]: + sudo kolla_copy_cacerts
Nov 28 12:47:41 np0005539065 nova_compute[188377]: + [[ ! -n '' ]]
Nov 28 12:47:41 np0005539065 nova_compute[188377]: + . kolla_extend_start
Nov 28 12:47:41 np0005539065 nova_compute[188377]: Running command: 'nova-compute'
Nov 28 12:47:41 np0005539065 nova_compute[188377]: + echo 'Running command: '\''nova-compute'\'''
Nov 28 12:47:41 np0005539065 nova_compute[188377]: + umask 0022
Nov 28 12:47:41 np0005539065 nova_compute[188377]: + exec nova-compute
Nov 28 12:47:41 np0005539065 python3.9[188538]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:47:42 np0005539065 python3.9[188689]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:47:43 np0005539065 nova_compute[188377]: 2025-11-28 17:47:43.087 188381 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 28 12:47:43 np0005539065 nova_compute[188377]: 2025-11-28 17:47:43.088 188381 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 28 12:47:43 np0005539065 nova_compute[188377]: 2025-11-28 17:47:43.088 188381 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 28 12:47:43 np0005539065 nova_compute[188377]: 2025-11-28 17:47:43.088 188381 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 28 12:47:43 np0005539065 python3.9[188841]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:47:43 np0005539065 nova_compute[188377]: 2025-11-28 17:47:43.238 188381 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 12:47:43 np0005539065 nova_compute[188377]: 2025-11-28 17:47:43.262 188381 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 12:47:43 np0005539065 nova_compute[188377]: 2025-11-28 17:47:43.263 188381 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 28 12:47:43 np0005539065 nova_compute[188377]: 2025-11-28 17:47:43.918 188381 INFO nova.virt.driver [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.044 188381 INFO nova.compute.provider_config [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.064 188381 DEBUG oslo_concurrency.lockutils [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.064 188381 DEBUG oslo_concurrency.lockutils [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.064 188381 DEBUG oslo_concurrency.lockutils [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.065 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.065 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.065 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.065 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.065 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.065 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.065 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.066 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.066 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.066 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.066 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.066 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.066 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.066 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.067 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.067 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.067 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.067 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.067 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.067 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.067 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.068 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.068 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.068 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.068 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.068 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.068 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.069 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.069 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.069 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.069 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.069 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.069 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.069 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.070 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.070 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.070 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.070 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.070 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.070 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.071 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.071 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.071 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.071 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.071 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.071 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.072 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.072 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.072 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.072 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.072 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.072 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.072 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.073 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.073 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.073 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.073 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.073 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.073 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.074 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.074 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.074 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.074 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.074 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.074 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.074 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.074 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.075 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.075 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.075 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.075 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.075 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.075 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.075 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.076 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.076 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.076 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.076 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.076 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.076 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.077 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.077 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.077 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.077 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.077 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.077 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.077 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.077 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.078 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.078 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.078 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.078 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.078 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.078 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.078 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.079 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.079 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.079 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.079 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.079 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.079 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.079 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.080 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.080 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.080 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.080 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.080 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.080 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.081 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.081 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.081 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.081 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.081 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.081 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.082 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.082 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.082 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.082 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.082 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.082 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.082 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.083 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.083 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.083 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.083 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.083 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.083 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.083 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.084 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.084 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.084 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.084 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.084 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.084 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.084 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.085 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.085 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.085 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.085 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.085 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.085 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.086 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.086 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.086 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.086 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.086 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.086 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.087 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.087 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.087 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.087 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.087 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.087 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.088 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.088 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.088 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.088 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.088 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.089 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.089 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.089 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.089 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.089 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.089 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.089 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.090 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.090 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.090 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.090 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.090 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.090 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.091 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.091 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.091 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.091 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.091 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.091 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.092 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.092 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.092 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.092 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.092 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.092 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.092 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.093 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.093 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.093 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.093 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.093 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.093 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.094 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.094 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.094 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.094 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.094 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.094 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.094 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.094 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.095 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.095 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.095 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.095 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.095 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.095 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.096 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.096 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.096 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.096 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.096 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.096 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.096 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.097 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.097 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.097 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.097 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.097 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.097 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.097 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.098 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.098 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.098 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.098 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.098 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.098 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.098 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.099 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.099 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.099 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.099 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.099 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.099 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.099 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.100 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.100 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.100 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.100 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.100 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.100 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.100 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.101 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.101 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.101 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.101 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.101 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.102 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.102 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.102 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.102 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.102 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.102 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.103 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.103 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.103 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.103 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.103 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.103 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.103 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.104 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.104 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.104 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.104 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.104 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.104 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.105 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.105 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.105 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.105 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.105 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.105 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.105 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.106 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.106 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.106 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.106 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.106 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.106 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.106 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.107 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.107 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.107 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.107 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.107 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.107 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.107 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.108 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.108 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.108 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.108 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.108 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.108 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.108 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.109 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.109 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.109 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.109 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.109 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.109 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.109 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.110 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.110 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.110 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.110 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.110 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.110 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.110 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.111 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.111 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.111 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.111 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.111 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.111 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.111 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.112 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.112 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.112 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.112 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.112 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.112 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.113 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.113 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.113 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.113 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.113 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.113 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.113 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.114 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.114 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.114 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.114 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.114 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.114 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.114 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.115 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.115 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.115 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.115 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.115 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.115 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.115 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.116 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.116 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.116 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.116 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.116 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.116 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.117 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.117 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.117 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.117 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.117 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.117 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.118 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.118 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.118 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.118 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.119 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.119 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.119 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.119 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.119 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.119 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.119 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.120 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.120 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.120 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.120 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.120 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.120 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.120 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.121 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.121 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.121 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.121 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.121 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.121 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.122 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.122 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.122 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.122 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.122 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.122 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.123 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.123 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.123 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.123 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.123 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.123 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.124 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.124 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.124 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.124 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.124 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.124 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.125 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.125 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.125 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.125 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.125 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.126 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.126 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.126 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.126 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.126 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.126 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.127 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.127 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.127 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.127 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.127 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.127 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.127 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.128 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.128 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.128 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.128 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.128 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.128 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.128 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.129 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.129 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.129 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.129 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.129 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.129 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.129 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.130 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.130 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.130 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.130 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.130 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.130 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.130 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.130 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.131 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.131 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.131 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.131 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.131 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.131 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.132 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.132 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.132 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.132 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.132 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.132 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.132 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.132 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.133 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.133 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.133 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.133 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.133 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.134 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.134 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.134 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.134 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.134 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.134 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.134 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.135 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.135 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.135 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.135 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.135 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.135 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.135 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.135 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.136 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.136 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.136 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.136 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.136 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.136 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.136 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.137 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.137 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.137 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.137 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.137 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.137 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.138 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.138 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.138 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.138 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.138 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.138 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.138 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.139 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.139 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.139 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.139 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.139 188381 WARNING oslo_config.cfg [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 28 12:47:44 np0005539065 nova_compute[188377]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 28 12:47:44 np0005539065 nova_compute[188377]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 28 12:47:44 np0005539065 nova_compute[188377]: and ``live_migration_inbound_addr`` respectively.
Nov 28 12:47:44 np0005539065 nova_compute[188377]: ).  Its value may be silently ignored in the future.#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.139 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.140 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.140 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.140 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.140 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.140 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.140 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.141 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.141 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.141 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.141 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.141 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.141 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.141 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.142 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.142 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.142 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.142 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.142 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.142 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.142 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.142 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.143 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.143 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.143 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.143 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.143 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.143 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.144 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.144 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.144 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.144 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.144 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.144 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.144 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.145 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.145 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.145 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.145 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.145 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.145 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.145 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.146 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.146 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.146 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.146 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.146 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.146 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.146 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.147 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.147 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.147 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.147 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.147 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.147 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.147 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.148 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.148 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.148 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.148 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.148 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.148 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.148 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.148 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.149 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.149 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.149 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.149 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.149 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.149 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.149 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.150 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.150 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.150 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.150 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.150 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.150 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.150 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.151 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.151 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.151 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.151 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.151 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.151 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.151 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.152 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.152 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.152 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.152 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.152 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.152 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.152 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.153 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.153 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.153 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.153 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.153 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.153 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.153 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.153 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.154 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.154 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.154 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.154 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.154 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.154 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.154 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.155 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.155 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.155 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.155 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.155 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.155 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.155 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.156 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.156 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.156 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.156 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.156 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.156 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.156 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.157 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.157 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.157 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.157 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.157 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.157 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.157 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.158 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.158 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.158 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.158 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.158 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.158 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.158 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.158 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.159 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.159 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.159 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.159 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.159 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.160 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.160 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.160 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.160 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.160 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.160 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.160 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.161 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.161 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.161 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.161 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.161 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.161 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.162 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.162 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.162 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.162 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.162 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.162 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.163 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.163 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.163 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.163 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.163 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.163 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.163 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.164 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.164 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.164 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.164 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.164 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.164 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.164 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.165 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.165 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.165 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.165 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.165 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.165 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.166 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.166 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.166 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.166 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.166 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.166 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.166 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.167 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.167 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.167 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.167 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.167 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.167 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.167 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.168 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.168 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.168 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.168 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.168 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.168 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.169 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.169 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.169 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.169 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.169 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.169 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.169 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.169 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.170 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.170 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.170 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.170 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.170 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.170 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.170 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.171 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.171 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.171 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.171 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.171 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.171 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.171 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.172 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.172 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.172 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.172 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.172 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.172 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.172 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.173 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.173 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.173 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.173 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.173 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.173 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.173 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.173 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.174 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.174 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.174 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.174 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.174 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.174 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.174 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.175 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.175 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.175 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.175 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.175 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.175 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.176 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.176 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.176 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.176 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.176 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.176 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.176 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.177 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.177 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.177 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.177 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.177 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.177 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.177 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.178 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.178 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.178 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.178 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.178 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.178 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.178 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.179 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.179 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.179 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.179 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.179 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.179 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.179 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.180 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.180 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.180 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.180 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.181 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.181 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.181 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.181 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.181 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.182 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.182 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.182 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.182 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.182 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.183 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.183 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.183 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.183 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.183 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.184 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.184 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.184 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.184 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.184 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.184 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.185 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.185 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.185 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.185 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.185 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.185 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.185 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.186 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.186 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.186 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.186 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.186 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.186 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.186 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.187 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.187 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.187 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.187 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.187 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.187 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.187 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.188 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.188 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.188 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.188 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.188 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.188 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.188 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.189 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.189 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.189 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.189 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.189 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.189 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.189 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.190 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.190 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.190 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.190 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.190 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.191 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.191 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.191 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.191 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.191 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.191 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.192 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.192 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.192 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.192 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.192 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.192 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.193 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.193 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.193 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.193 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.193 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.193 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.193 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.194 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.194 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.194 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.194 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.194 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.194 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.194 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.195 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.195 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.195 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.195 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.195 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.195 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.195 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.196 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.196 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.196 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.196 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.196 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.196 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.196 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.197 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.197 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.197 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.197 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.197 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.197 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.197 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.198 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.198 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.198 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.198 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.198 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.198 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.198 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.199 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.199 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.199 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.199 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.199 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.199 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.199 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.200 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.200 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.200 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.200 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.200 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.200 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.200 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.201 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.201 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.201 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.201 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.201 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.201 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.201 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.202 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.202 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.202 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.202 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.202 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.202 188381 DEBUG oslo_service.service [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.203 188381 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 28 12:47:44 np0005539065 python3.9[188995]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.219 188381 DEBUG nova.virt.libvirt.host [None req-457c5bc9-ceb0-44e0-afcb-49b49d98e2a1 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.220 188381 DEBUG nova.virt.libvirt.host [None req-457c5bc9-ceb0-44e0-afcb-49b49d98e2a1 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.221 188381 DEBUG nova.virt.libvirt.host [None req-457c5bc9-ceb0-44e0-afcb-49b49d98e2a1 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.221 188381 DEBUG nova.virt.libvirt.host [None req-457c5bc9-ceb0-44e0-afcb-49b49d98e2a1 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 28 12:47:44 np0005539065 systemd[1]: Starting libvirt QEMU daemon...
Nov 28 12:47:44 np0005539065 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 28 12:47:44 np0005539065 systemd[1]: Started libvirt QEMU daemon.
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.295 188381 DEBUG nova.virt.libvirt.host [None req-457c5bc9-ceb0-44e0-afcb-49b49d98e2a1 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f7f78d5d250> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.298 188381 DEBUG nova.virt.libvirt.host [None req-457c5bc9-ceb0-44e0-afcb-49b49d98e2a1 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f7f78d5d250> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.298 188381 INFO nova.virt.libvirt.driver [None req-457c5bc9-ceb0-44e0-afcb-49b49d98e2a1 - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.316 188381 WARNING nova.virt.libvirt.driver [None req-457c5bc9-ceb0-44e0-afcb-49b49d98e2a1 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 28 12:47:44 np0005539065 nova_compute[188377]: 2025-11-28 17:47:44.316 188381 DEBUG nova.virt.libvirt.volume.mount [None req-457c5bc9-ceb0-44e0-afcb-49b49d98e2a1 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 28 12:47:45 np0005539065 nova_compute[188377]: 2025-11-28 17:47:45.112 188381 INFO nova.virt.libvirt.host [None req-457c5bc9-ceb0-44e0-afcb-49b49d98e2a1 - - - - - -] Libvirt host capabilities <capabilities>
Nov 28 12:47:45 np0005539065 nova_compute[188377]: 
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <host>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <uuid>23602de7-dd9c-46ae-9cba-a45f7911b9d9</uuid>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <cpu>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <arch>x86_64</arch>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model>EPYC-Rome-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <vendor>AMD</vendor>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <microcode version='16777317'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <signature family='23' model='49' stepping='0'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature name='x2apic'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature name='tsc-deadline'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature name='osxsave'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature name='hypervisor'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature name='tsc_adjust'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature name='spec-ctrl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature name='stibp'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature name='arch-capabilities'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature name='ssbd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature name='cmp_legacy'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature name='topoext'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature name='virt-ssbd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature name='lbrv'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature name='tsc-scale'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature name='vmcb-clean'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature name='pause-filter'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature name='pfthreshold'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature name='svme-addr-chk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature name='rdctl-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature name='skip-l1dfl-vmentry'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature name='mds-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature name='pschange-mc-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <pages unit='KiB' size='4'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <pages unit='KiB' size='2048'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <pages unit='KiB' size='1048576'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </cpu>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <power_management>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <suspend_mem/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <suspend_disk/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <suspend_hybrid/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </power_management>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <iommu support='no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <migration_features>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <live/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <uri_transports>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <uri_transport>tcp</uri_transport>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <uri_transport>rdma</uri_transport>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </uri_transports>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </migration_features>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <topology>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <cells num='1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <cell id='0'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:          <memory unit='KiB'>7864324</memory>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:          <pages unit='KiB' size='4'>1966081</pages>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:          <pages unit='KiB' size='2048'>0</pages>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:          <distances>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:            <sibling id='0' value='10'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:          </distances>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:          <cpus num='8'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:          </cpus>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        </cell>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </cells>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </topology>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <cache>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </cache>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <secmodel>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model>selinux</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <doi>0</doi>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </secmodel>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <secmodel>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model>dac</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <doi>0</doi>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </secmodel>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  </host>
Nov 28 12:47:45 np0005539065 nova_compute[188377]: 
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <guest>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <os_type>hvm</os_type>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <arch name='i686'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <wordsize>32</wordsize>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <domain type='qemu'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <domain type='kvm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </arch>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <features>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <pae/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <nonpae/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <acpi default='on' toggle='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <apic default='on' toggle='no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <cpuselection/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <deviceboot/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <disksnapshot default='on' toggle='no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <externalSnapshot/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </features>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  </guest>
Nov 28 12:47:45 np0005539065 nova_compute[188377]: 
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <guest>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <os_type>hvm</os_type>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <arch name='x86_64'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <wordsize>64</wordsize>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <domain type='qemu'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <domain type='kvm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </arch>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <features>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <acpi default='on' toggle='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <apic default='on' toggle='no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <cpuselection/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <deviceboot/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <disksnapshot default='on' toggle='no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <externalSnapshot/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </features>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  </guest>
Nov 28 12:47:45 np0005539065 nova_compute[188377]: 
Nov 28 12:47:45 np0005539065 nova_compute[188377]: </capabilities>
Nov 28 12:47:45 np0005539065 nova_compute[188377]: 2025-11-28 17:47:45.126 188381 DEBUG nova.virt.libvirt.host [None req-457c5bc9-ceb0-44e0-afcb-49b49d98e2a1 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 28 12:47:45 np0005539065 nova_compute[188377]: 2025-11-28 17:47:45.153 188381 DEBUG nova.virt.libvirt.host [None req-457c5bc9-ceb0-44e0-afcb-49b49d98e2a1 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 28 12:47:45 np0005539065 nova_compute[188377]: <domainCapabilities>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <path>/usr/libexec/qemu-kvm</path>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <domain>kvm</domain>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <arch>i686</arch>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <vcpu max='240'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <iothreads supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <os supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <enum name='firmware'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <loader supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='type'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>rom</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>pflash</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='readonly'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>yes</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>no</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='secure'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>no</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </loader>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  </os>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <cpu>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <mode name='host-passthrough' supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='hostPassthroughMigratable'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>on</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>off</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </mode>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <mode name='maximum' supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='maximumMigratable'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>on</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>off</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </mode>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <mode name='host-model' supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <vendor>AMD</vendor>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='x2apic'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='tsc-deadline'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='hypervisor'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='tsc_adjust'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='spec-ctrl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='stibp'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='ssbd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='cmp_legacy'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='overflow-recov'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='succor'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='ibrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='amd-ssbd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='virt-ssbd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='lbrv'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='tsc-scale'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='vmcb-clean'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='flushbyasid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='pause-filter'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='pfthreshold'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='svme-addr-chk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='disable' name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </mode>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <mode name='custom' supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-noTSX'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server-v5'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cooperlake'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cooperlake-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cooperlake-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Denverton'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mpx'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Denverton-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mpx'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Denverton-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Denverton-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Dhyana-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Genoa'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amd-psfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='auto-ibrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='no-nested-data-bp'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='null-sel-clr-base'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='stibp-always-on'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Genoa-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amd-psfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='auto-ibrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='no-nested-data-bp'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='null-sel-clr-base'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='stibp-always-on'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Milan'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Milan-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Milan-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amd-psfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='no-nested-data-bp'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='null-sel-clr-base'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='stibp-always-on'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Rome'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Rome-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Rome-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Rome-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='GraniteRapids'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mcdt-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pbrsb-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='prefetchiti'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='GraniteRapids-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mcdt-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pbrsb-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='prefetchiti'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='GraniteRapids-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx10'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx10-128'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx10-256'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx10-512'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mcdt-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pbrsb-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='prefetchiti'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-noTSX'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-noTSX'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v5'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v6'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v7'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='IvyBridge'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='IvyBridge-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='IvyBridge-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='IvyBridge-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='KnightsMill'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-4fmaps'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-4vnniw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512er'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512pf'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='KnightsMill-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-4fmaps'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-4vnniw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512er'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512pf'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Opteron_G4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fma4'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xop'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Opteron_G4-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fma4'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xop'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Opteron_G5'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fma4'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tbm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xop'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Opteron_G5-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fma4'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tbm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xop'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='SapphireRapids'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='SapphireRapids-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='SapphireRapids-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='SapphireRapids-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='SierraForest'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-ne-convert'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cmpccxadd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mcdt-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pbrsb-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='SierraForest-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-ne-convert'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cmpccxadd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mcdt-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pbrsb-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-v5'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Snowridge'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='core-capability'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mpx'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='split-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Snowridge-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='core-capability'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mpx'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='split-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Snowridge-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='core-capability'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='split-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Snowridge-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='core-capability'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='split-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Snowridge-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='athlon'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnow'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnowext'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='athlon-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnow'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnowext'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='core2duo'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='core2duo-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='coreduo'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='coreduo-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='n270'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='n270-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='phenom'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnow'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnowext'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='phenom-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnow'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnowext'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </mode>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  </cpu>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <memoryBacking supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <enum name='sourceType'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <value>file</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <value>anonymous</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <value>memfd</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  </memoryBacking>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <devices>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <disk supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='diskDevice'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>disk</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>cdrom</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>floppy</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>lun</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='bus'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>ide</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>fdc</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>scsi</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>usb</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>sata</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='model'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio-transitional</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio-non-transitional</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </disk>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <graphics supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='type'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>vnc</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>egl-headless</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>dbus</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </graphics>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <video supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='modelType'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>vga</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>cirrus</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>none</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>bochs</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>ramfb</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </video>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <hostdev supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='mode'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>subsystem</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='startupPolicy'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>default</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>mandatory</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>requisite</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>optional</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='subsysType'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>usb</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>pci</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>scsi</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='capsType'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='pciBackend'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </hostdev>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <rng supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='model'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio-transitional</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio-non-transitional</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='backendModel'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>random</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>egd</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>builtin</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </rng>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <filesystem supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='driverType'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>path</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>handle</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtiofs</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </filesystem>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <tpm supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='model'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>tpm-tis</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>tpm-crb</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='backendModel'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>emulator</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>external</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='backendVersion'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>2.0</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </tpm>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <redirdev supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='bus'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>usb</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </redirdev>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <channel supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='type'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>pty</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>unix</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </channel>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <crypto supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='model'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='type'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>qemu</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='backendModel'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>builtin</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </crypto>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <interface supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='backendType'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>default</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>passt</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </interface>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <panic supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='model'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>isa</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>hyperv</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </panic>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <console supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='type'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>null</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>vc</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>pty</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>dev</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>file</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>pipe</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>stdio</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>udp</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>tcp</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>unix</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>qemu-vdagent</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>dbus</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </console>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  </devices>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <features>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <gic supported='no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <vmcoreinfo supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <genid supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <backingStoreInput supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <backup supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <async-teardown supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <ps2 supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <sev supported='no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <sgx supported='no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <hyperv supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='features'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>relaxed</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>vapic</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>spinlocks</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>vpindex</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>runtime</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>synic</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>stimer</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>reset</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>vendor_id</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>frequencies</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>reenlightenment</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>tlbflush</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>ipi</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>avic</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>emsr_bitmap</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>xmm_input</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <defaults>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <spinlocks>4095</spinlocks>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <stimer_direct>on</stimer_direct>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <tlbflush_direct>on</tlbflush_direct>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <tlbflush_extended>on</tlbflush_extended>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </defaults>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </hyperv>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <launchSecurity supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='sectype'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>tdx</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </launchSecurity>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  </features>
Nov 28 12:47:45 np0005539065 nova_compute[188377]: </domainCapabilities>
Nov 28 12:47:45 np0005539065 nova_compute[188377]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 28 12:47:45 np0005539065 nova_compute[188377]: 2025-11-28 17:47:45.164 188381 DEBUG nova.virt.libvirt.host [None req-457c5bc9-ceb0-44e0-afcb-49b49d98e2a1 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 28 12:47:45 np0005539065 nova_compute[188377]: <domainCapabilities>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <path>/usr/libexec/qemu-kvm</path>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <domain>kvm</domain>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <arch>i686</arch>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <vcpu max='4096'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <iothreads supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <os supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <enum name='firmware'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <loader supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='type'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>rom</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>pflash</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='readonly'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>yes</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>no</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='secure'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>no</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </loader>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  </os>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <cpu>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <mode name='host-passthrough' supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='hostPassthroughMigratable'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>on</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>off</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </mode>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <mode name='maximum' supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='maximumMigratable'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>on</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>off</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </mode>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <mode name='host-model' supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <vendor>AMD</vendor>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='x2apic'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='tsc-deadline'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='hypervisor'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='tsc_adjust'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='spec-ctrl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='stibp'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='ssbd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='cmp_legacy'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='overflow-recov'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='succor'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='ibrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='amd-ssbd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='virt-ssbd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='lbrv'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='tsc-scale'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='vmcb-clean'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='flushbyasid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='pause-filter'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='pfthreshold'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='svme-addr-chk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='disable' name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </mode>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <mode name='custom' supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-noTSX'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 python3.9[189229]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server-v5'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cooperlake'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cooperlake-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cooperlake-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Denverton'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mpx'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Denverton-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mpx'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Denverton-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Denverton-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Dhyana-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Genoa'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amd-psfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='auto-ibrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='no-nested-data-bp'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='null-sel-clr-base'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='stibp-always-on'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Genoa-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amd-psfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='auto-ibrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='no-nested-data-bp'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='null-sel-clr-base'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='stibp-always-on'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Milan'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Milan-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Milan-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amd-psfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='no-nested-data-bp'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='null-sel-clr-base'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='stibp-always-on'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Rome'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Rome-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Rome-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Rome-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='GraniteRapids'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mcdt-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pbrsb-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='prefetchiti'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='GraniteRapids-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mcdt-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pbrsb-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='prefetchiti'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='GraniteRapids-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx10'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx10-128'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx10-256'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx10-512'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mcdt-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pbrsb-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='prefetchiti'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-noTSX'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-noTSX'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v5'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v6'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v7'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='IvyBridge'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='IvyBridge-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='IvyBridge-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='IvyBridge-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='KnightsMill'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-4fmaps'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-4vnniw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512er'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512pf'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='KnightsMill-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-4fmaps'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-4vnniw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512er'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512pf'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Opteron_G4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fma4'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xop'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Opteron_G4-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fma4'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xop'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Opteron_G5'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fma4'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tbm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xop'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Opteron_G5-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fma4'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tbm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xop'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='SapphireRapids'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='SapphireRapids-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='SapphireRapids-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='SapphireRapids-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='SierraForest'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-ne-convert'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cmpccxadd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mcdt-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pbrsb-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='SierraForest-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-ne-convert'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cmpccxadd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mcdt-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pbrsb-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-v5'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Snowridge'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='core-capability'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mpx'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='split-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Snowridge-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='core-capability'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mpx'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='split-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Snowridge-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='core-capability'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='split-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Snowridge-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='core-capability'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='split-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Snowridge-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='athlon'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnow'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnowext'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='athlon-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnow'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnowext'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='core2duo'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='core2duo-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='coreduo'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='coreduo-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='n270'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='n270-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='phenom'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnow'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnowext'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='phenom-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnow'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnowext'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </mode>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  </cpu>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <memoryBacking supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <enum name='sourceType'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <value>file</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <value>anonymous</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <value>memfd</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  </memoryBacking>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <devices>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <disk supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='diskDevice'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>disk</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>cdrom</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>floppy</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>lun</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='bus'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>fdc</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>scsi</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>usb</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>sata</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='model'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio-transitional</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio-non-transitional</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </disk>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <graphics supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='type'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>vnc</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>egl-headless</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>dbus</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </graphics>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <video supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='modelType'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>vga</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>cirrus</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>none</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>bochs</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>ramfb</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </video>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <hostdev supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='mode'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>subsystem</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='startupPolicy'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>default</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>mandatory</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>requisite</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>optional</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='subsysType'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>usb</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>pci</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>scsi</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='capsType'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='pciBackend'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </hostdev>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <rng supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='model'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio-transitional</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio-non-transitional</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='backendModel'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>random</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>egd</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>builtin</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </rng>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <filesystem supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='driverType'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>path</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>handle</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtiofs</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </filesystem>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <tpm supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='model'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>tpm-tis</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>tpm-crb</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='backendModel'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>emulator</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>external</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='backendVersion'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>2.0</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </tpm>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <redirdev supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='bus'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>usb</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </redirdev>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <channel supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='type'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>pty</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>unix</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </channel>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <crypto supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='model'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='type'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>qemu</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='backendModel'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>builtin</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </crypto>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <interface supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='backendType'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>default</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>passt</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </interface>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <panic supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='model'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>isa</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>hyperv</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </panic>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <console supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='type'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>null</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>vc</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>pty</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>dev</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>file</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>pipe</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>stdio</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>udp</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>tcp</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>unix</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>qemu-vdagent</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>dbus</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </console>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  </devices>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <features>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <gic supported='no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <vmcoreinfo supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <genid supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <backingStoreInput supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <backup supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <async-teardown supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <ps2 supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <sev supported='no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <sgx supported='no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <hyperv supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='features'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>relaxed</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>vapic</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>spinlocks</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>vpindex</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>runtime</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>synic</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>stimer</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>reset</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>vendor_id</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>frequencies</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>reenlightenment</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>tlbflush</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>ipi</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>avic</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>emsr_bitmap</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>xmm_input</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <defaults>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <spinlocks>4095</spinlocks>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <stimer_direct>on</stimer_direct>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <tlbflush_direct>on</tlbflush_direct>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <tlbflush_extended>on</tlbflush_extended>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </defaults>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </hyperv>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <launchSecurity supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='sectype'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>tdx</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </launchSecurity>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  </features>
Nov 28 12:47:45 np0005539065 nova_compute[188377]: </domainCapabilities>
Nov 28 12:47:45 np0005539065 nova_compute[188377]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 28 12:47:45 np0005539065 nova_compute[188377]: 2025-11-28 17:47:45.222 188381 DEBUG nova.virt.libvirt.host [None req-457c5bc9-ceb0-44e0-afcb-49b49d98e2a1 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 28 12:47:45 np0005539065 nova_compute[188377]: 2025-11-28 17:47:45.226 188381 DEBUG nova.virt.libvirt.host [None req-457c5bc9-ceb0-44e0-afcb-49b49d98e2a1 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 28 12:47:45 np0005539065 nova_compute[188377]: <domainCapabilities>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <path>/usr/libexec/qemu-kvm</path>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <domain>kvm</domain>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <arch>x86_64</arch>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <vcpu max='240'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <iothreads supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <os supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <enum name='firmware'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <loader supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='type'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>rom</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>pflash</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='readonly'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>yes</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>no</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='secure'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>no</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </loader>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  </os>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <cpu>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <mode name='host-passthrough' supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='hostPassthroughMigratable'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>on</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>off</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </mode>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <mode name='maximum' supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='maximumMigratable'>
Nov 28 12:47:45 np0005539065 systemd[1]: Stopping nova_compute container...
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>on</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>off</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </mode>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <mode name='host-model' supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <vendor>AMD</vendor>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='x2apic'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='tsc-deadline'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='hypervisor'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='tsc_adjust'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='spec-ctrl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='stibp'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='ssbd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='cmp_legacy'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='overflow-recov'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='succor'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='ibrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='amd-ssbd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='virt-ssbd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='lbrv'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='tsc-scale'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='vmcb-clean'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='flushbyasid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='pause-filter'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='pfthreshold'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='svme-addr-chk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='disable' name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </mode>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <mode name='custom' supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-noTSX'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server-v5'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cooperlake'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cooperlake-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cooperlake-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Denverton'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mpx'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Denverton-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mpx'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Denverton-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Denverton-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Dhyana-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Genoa'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amd-psfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='auto-ibrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='no-nested-data-bp'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='null-sel-clr-base'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='stibp-always-on'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Genoa-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amd-psfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='auto-ibrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='no-nested-data-bp'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='null-sel-clr-base'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='stibp-always-on'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Milan'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Milan-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Milan-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amd-psfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='no-nested-data-bp'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='null-sel-clr-base'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='stibp-always-on'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Rome'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Rome-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Rome-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Rome-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='GraniteRapids'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mcdt-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pbrsb-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='prefetchiti'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='GraniteRapids-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mcdt-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pbrsb-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='prefetchiti'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='GraniteRapids-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx10'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx10-128'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx10-256'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx10-512'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mcdt-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pbrsb-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='prefetchiti'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-noTSX'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-noTSX'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v5'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v6'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v7'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='IvyBridge'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='IvyBridge-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='IvyBridge-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='IvyBridge-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='KnightsMill'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-4fmaps'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-4vnniw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512er'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512pf'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='KnightsMill-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-4fmaps'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-4vnniw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512er'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512pf'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Opteron_G4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fma4'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xop'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Opteron_G4-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fma4'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xop'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Opteron_G5'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fma4'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tbm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xop'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Opteron_G5-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fma4'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tbm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xop'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='SapphireRapids'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='SapphireRapids-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='SapphireRapids-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='SapphireRapids-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='SierraForest'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-ne-convert'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cmpccxadd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mcdt-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pbrsb-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='SierraForest-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-ne-convert'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cmpccxadd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mcdt-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pbrsb-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-v5'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Snowridge'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='core-capability'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mpx'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='split-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Snowridge-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='core-capability'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mpx'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='split-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Snowridge-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='core-capability'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='split-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Snowridge-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='core-capability'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='split-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Snowridge-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='athlon'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnow'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnowext'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='athlon-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnow'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnowext'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='core2duo'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='core2duo-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='coreduo'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='coreduo-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='n270'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='n270-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='phenom'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnow'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnowext'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='phenom-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnow'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnowext'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </mode>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  </cpu>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <memoryBacking supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <enum name='sourceType'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <value>file</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <value>anonymous</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <value>memfd</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  </memoryBacking>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <devices>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <disk supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='diskDevice'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>disk</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>cdrom</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>floppy</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>lun</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='bus'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>ide</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>fdc</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>scsi</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>usb</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>sata</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='model'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio-transitional</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio-non-transitional</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </disk>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <graphics supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='type'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>vnc</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>egl-headless</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>dbus</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </graphics>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <video supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='modelType'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>vga</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>cirrus</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>none</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>bochs</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>ramfb</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </video>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <hostdev supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='mode'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>subsystem</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='startupPolicy'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>default</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>mandatory</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>requisite</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>optional</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='subsysType'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>usb</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>pci</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>scsi</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='capsType'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='pciBackend'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </hostdev>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <rng supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='model'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio-transitional</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio-non-transitional</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='backendModel'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>random</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>egd</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>builtin</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </rng>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <filesystem supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='driverType'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>path</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>handle</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtiofs</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </filesystem>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <tpm supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='model'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>tpm-tis</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>tpm-crb</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='backendModel'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>emulator</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>external</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='backendVersion'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>2.0</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </tpm>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <redirdev supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='bus'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>usb</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </redirdev>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <channel supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='type'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>pty</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>unix</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </channel>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <crypto supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='model'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='type'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>qemu</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='backendModel'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>builtin</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </crypto>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <interface supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='backendType'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>default</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>passt</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </interface>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <panic supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='model'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>isa</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>hyperv</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </panic>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <console supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='type'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>null</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>vc</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>pty</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>dev</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>file</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>pipe</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>stdio</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>udp</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>tcp</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>unix</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>qemu-vdagent</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>dbus</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </console>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  </devices>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <features>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <gic supported='no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <vmcoreinfo supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <genid supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <backingStoreInput supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <backup supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <async-teardown supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <ps2 supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <sev supported='no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <sgx supported='no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <hyperv supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='features'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>relaxed</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>vapic</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>spinlocks</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>vpindex</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>runtime</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>synic</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>stimer</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>reset</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>vendor_id</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>frequencies</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>reenlightenment</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>tlbflush</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>ipi</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>avic</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>emsr_bitmap</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>xmm_input</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <defaults>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <spinlocks>4095</spinlocks>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <stimer_direct>on</stimer_direct>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <tlbflush_direct>on</tlbflush_direct>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <tlbflush_extended>on</tlbflush_extended>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </defaults>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </hyperv>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <launchSecurity supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='sectype'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>tdx</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </launchSecurity>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  </features>
Nov 28 12:47:45 np0005539065 nova_compute[188377]: </domainCapabilities>
Nov 28 12:47:45 np0005539065 nova_compute[188377]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 28 12:47:45 np0005539065 nova_compute[188377]: 2025-11-28 17:47:45.287 188381 DEBUG nova.virt.libvirt.host [None req-457c5bc9-ceb0-44e0-afcb-49b49d98e2a1 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 28 12:47:45 np0005539065 nova_compute[188377]: <domainCapabilities>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <path>/usr/libexec/qemu-kvm</path>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <domain>kvm</domain>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <arch>x86_64</arch>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <vcpu max='4096'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <iothreads supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <os supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <enum name='firmware'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <value>efi</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <loader supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='type'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>rom</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>pflash</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='readonly'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>yes</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>no</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='secure'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>yes</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>no</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </loader>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  </os>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <cpu>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <mode name='host-passthrough' supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='hostPassthroughMigratable'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>on</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>off</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </mode>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <mode name='maximum' supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='maximumMigratable'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>on</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>off</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </mode>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <mode name='host-model' supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <vendor>AMD</vendor>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='x2apic'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='tsc-deadline'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='hypervisor'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='tsc_adjust'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='spec-ctrl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='stibp'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='ssbd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='cmp_legacy'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='overflow-recov'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='succor'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='ibrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='amd-ssbd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='virt-ssbd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='lbrv'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='tsc-scale'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='vmcb-clean'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='flushbyasid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='pause-filter'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='pfthreshold'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='svme-addr-chk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <feature policy='disable' name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </mode>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <mode name='custom' supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-noTSX'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Broadwell-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cascadelake-Server-v5'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cooperlake'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cooperlake-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Cooperlake-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Denverton'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mpx'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Denverton-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mpx'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Denverton-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Denverton-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Dhyana-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Genoa'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amd-psfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='auto-ibrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='no-nested-data-bp'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='null-sel-clr-base'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='stibp-always-on'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Genoa-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amd-psfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='auto-ibrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='no-nested-data-bp'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='null-sel-clr-base'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='stibp-always-on'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Milan'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Milan-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Milan-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amd-psfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='no-nested-data-bp'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='null-sel-clr-base'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='stibp-always-on'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Rome'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Rome-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Rome-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-Rome-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='EPYC-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='GraniteRapids'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mcdt-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pbrsb-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='prefetchiti'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='GraniteRapids-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mcdt-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pbrsb-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='prefetchiti'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='GraniteRapids-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx10'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx10-128'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx10-256'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx10-512'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mcdt-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pbrsb-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='prefetchiti'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-noTSX'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Haswell-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-noTSX'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v5'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v6'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Icelake-Server-v7'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='IvyBridge'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='IvyBridge-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='IvyBridge-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='IvyBridge-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='KnightsMill'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-4fmaps'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-4vnniw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512er'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512pf'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='KnightsMill-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-4fmaps'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-4vnniw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512er'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512pf'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Opteron_G4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fma4'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xop'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Opteron_G4-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fma4'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xop'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Opteron_G5'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fma4'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tbm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xop'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Opteron_G5-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fma4'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tbm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xop'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='SapphireRapids'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='SapphireRapids-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='SapphireRapids-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='SapphireRapids-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='amx-tile'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-bf16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-fp16'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bitalg'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrc'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fzrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='la57'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='taa-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xfd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='SierraForest'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-ne-convert'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cmpccxadd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mcdt-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pbrsb-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='SierraForest-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-ifma'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-ne-convert'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx-vnni-int8'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cmpccxadd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fbsdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='fsrs'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ibrs-all'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mcdt-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pbrsb-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='psdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='serialize'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vaes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Client-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='hle'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='rtm'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Skylake-Server-v5'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512bw'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512cd'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512dq'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512f'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='avx512vl'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='invpcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pcid'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='pku'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Snowridge'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='core-capability'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mpx'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='split-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Snowridge-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='core-capability'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='mpx'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='split-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Snowridge-v2'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='core-capability'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='split-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Snowridge-v3'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='core-capability'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='split-lock-detect'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='Snowridge-v4'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='cldemote'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='erms'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='gfni'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdir64b'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='movdiri'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='xsaves'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='athlon'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnow'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnowext'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='athlon-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnow'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnowext'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='core2duo'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='core2duo-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='coreduo'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='coreduo-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='n270'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='n270-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='ss'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='phenom'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnow'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnowext'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <blockers model='phenom-v1'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnow'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <feature name='3dnowext'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </blockers>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </mode>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  </cpu>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <memoryBacking supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <enum name='sourceType'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <value>file</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <value>anonymous</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <value>memfd</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  </memoryBacking>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <devices>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <disk supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='diskDevice'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>disk</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>cdrom</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>floppy</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>lun</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='bus'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>fdc</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>scsi</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>usb</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>sata</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='model'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio-transitional</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio-non-transitional</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </disk>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <graphics supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='type'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>vnc</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>egl-headless</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>dbus</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </graphics>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <video supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='modelType'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>vga</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>cirrus</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>none</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>bochs</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>ramfb</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </video>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <hostdev supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='mode'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>subsystem</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='startupPolicy'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>default</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>mandatory</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>requisite</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>optional</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='subsysType'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>usb</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>pci</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>scsi</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='capsType'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='pciBackend'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </hostdev>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <rng supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='model'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio-transitional</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtio-non-transitional</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='backendModel'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>random</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>egd</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>builtin</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </rng>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <filesystem supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='driverType'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>path</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>handle</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>virtiofs</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </filesystem>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <tpm supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='model'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>tpm-tis</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>tpm-crb</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='backendModel'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>emulator</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>external</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='backendVersion'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>2.0</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </tpm>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <redirdev supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='bus'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>usb</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </redirdev>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <channel supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='type'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>pty</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>unix</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </channel>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <crypto supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='model'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='type'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>qemu</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='backendModel'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>builtin</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </crypto>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <interface supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='backendType'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>default</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>passt</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </interface>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <panic supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='model'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>isa</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>hyperv</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </panic>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <console supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='type'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>null</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>vc</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>pty</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>dev</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>file</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>pipe</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>stdio</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>udp</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>tcp</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>unix</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>qemu-vdagent</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>dbus</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </console>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  </devices>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  <features>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <gic supported='no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <vmcoreinfo supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <genid supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <backingStoreInput supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <backup supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <async-teardown supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <ps2 supported='yes'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <sev supported='no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <sgx supported='no'/>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <hyperv supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='features'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>relaxed</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>vapic</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>spinlocks</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>vpindex</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>runtime</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>synic</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>stimer</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>reset</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>vendor_id</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>frequencies</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>reenlightenment</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>tlbflush</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>ipi</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>avic</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>emsr_bitmap</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>xmm_input</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <defaults>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <spinlocks>4095</spinlocks>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <stimer_direct>on</stimer_direct>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <tlbflush_direct>on</tlbflush_direct>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <tlbflush_extended>on</tlbflush_extended>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </defaults>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </hyperv>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    <launchSecurity supported='yes'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      <enum name='sectype'>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:        <value>tdx</value>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:      </enum>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:    </launchSecurity>
Nov 28 12:47:45 np0005539065 nova_compute[188377]:  </features>
Nov 28 12:47:45 np0005539065 nova_compute[188377]: </domainCapabilities>
Nov 28 12:47:45 np0005539065 nova_compute[188377]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 28 12:47:45 np0005539065 nova_compute[188377]: 2025-11-28 17:47:45.347 188381 DEBUG nova.virt.libvirt.host [None req-457c5bc9-ceb0-44e0-afcb-49b49d98e2a1 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 28 12:47:45 np0005539065 nova_compute[188377]: 2025-11-28 17:47:45.347 188381 DEBUG nova.virt.libvirt.host [None req-457c5bc9-ceb0-44e0-afcb-49b49d98e2a1 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 28 12:47:45 np0005539065 nova_compute[188377]: 2025-11-28 17:47:45.348 188381 DEBUG nova.virt.libvirt.host [None req-457c5bc9-ceb0-44e0-afcb-49b49d98e2a1 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 28 12:47:45 np0005539065 nova_compute[188377]: 2025-11-28 17:47:45.348 188381 INFO nova.virt.libvirt.host [None req-457c5bc9-ceb0-44e0-afcb-49b49d98e2a1 - - - - - -] Secure Boot support detected
Nov 28 12:47:45 np0005539065 nova_compute[188377]: 2025-11-28 17:47:45.350 188381 INFO nova.virt.libvirt.driver [None req-457c5bc9-ceb0-44e0-afcb-49b49d98e2a1 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 28 12:47:45 np0005539065 nova_compute[188377]: 2025-11-28 17:47:45.351 188381 INFO nova.virt.libvirt.driver [None req-457c5bc9-ceb0-44e0-afcb-49b49d98e2a1 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 28 12:47:45 np0005539065 nova_compute[188377]: 2025-11-28 17:47:45.361 188381 DEBUG nova.virt.libvirt.driver [None req-457c5bc9-ceb0-44e0-afcb-49b49d98e2a1 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 28 12:47:45 np0005539065 nova_compute[188377]: 2025-11-28 17:47:45.366 188381 DEBUG oslo_concurrency.lockutils [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 28 12:47:45 np0005539065 nova_compute[188377]: 2025-11-28 17:47:45.366 188381 DEBUG oslo_concurrency.lockutils [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 28 12:47:45 np0005539065 nova_compute[188377]: 2025-11-28 17:47:45.366 188381 DEBUG oslo_concurrency.lockutils [None req-8ef9ee08-446e-46e8-89c9-f862032a2771 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 28 12:47:46 np0005539065 virtqemud[189019]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 28 12:47:46 np0005539065 systemd[1]: libpod-075a1f79b5b341dc66cb95f46a3b8ef1787439828f52b00ffd0a54fb4aede071.scope: Deactivated successfully.
Nov 28 12:47:46 np0005539065 virtqemud[189019]: hostname: compute-0
Nov 28 12:47:46 np0005539065 virtqemud[189019]: End of file while reading data: Input/output error
Nov 28 12:47:46 np0005539065 systemd[1]: libpod-075a1f79b5b341dc66cb95f46a3b8ef1787439828f52b00ffd0a54fb4aede071.scope: Consumed 3.347s CPU time.
Nov 28 12:47:46 np0005539065 podman[189237]: 2025-11-28 17:47:46.061443321 +0000 UTC m=+0.748366606 container died 075a1f79b5b341dc66cb95f46a3b8ef1787439828f52b00ffd0a54fb4aede071 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 28 12:47:46 np0005539065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-075a1f79b5b341dc66cb95f46a3b8ef1787439828f52b00ffd0a54fb4aede071-userdata-shm.mount: Deactivated successfully.
Nov 28 12:47:46 np0005539065 systemd[1]: var-lib-containers-storage-overlay-274ad283ac75b771bd842f05ad679e7f0b7c426c277f4b186e2da4dbeca20d6e-merged.mount: Deactivated successfully.
Nov 28 12:47:46 np0005539065 podman[189237]: 2025-11-28 17:47:46.124603227 +0000 UTC m=+0.811526512 container cleanup 075a1f79b5b341dc66cb95f46a3b8ef1787439828f52b00ffd0a54fb4aede071 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3)
Nov 28 12:47:46 np0005539065 podman[189237]: nova_compute
Nov 28 12:47:46 np0005539065 podman[189268]: nova_compute
Nov 28 12:47:46 np0005539065 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 28 12:47:46 np0005539065 systemd[1]: Stopped nova_compute container.
Nov 28 12:47:46 np0005539065 systemd[1]: Starting nova_compute container...
Nov 28 12:47:46 np0005539065 systemd[1]: Started libcrun container.
Nov 28 12:47:46 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/274ad283ac75b771bd842f05ad679e7f0b7c426c277f4b186e2da4dbeca20d6e/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 28 12:47:46 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/274ad283ac75b771bd842f05ad679e7f0b7c426c277f4b186e2da4dbeca20d6e/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 28 12:47:46 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/274ad283ac75b771bd842f05ad679e7f0b7c426c277f4b186e2da4dbeca20d6e/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 28 12:47:46 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/274ad283ac75b771bd842f05ad679e7f0b7c426c277f4b186e2da4dbeca20d6e/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 28 12:47:46 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/274ad283ac75b771bd842f05ad679e7f0b7c426c277f4b186e2da4dbeca20d6e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 28 12:47:46 np0005539065 podman[189281]: 2025-11-28 17:47:46.390506538 +0000 UTC m=+0.141267690 container init 075a1f79b5b341dc66cb95f46a3b8ef1787439828f52b00ffd0a54fb4aede071 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3)
Nov 28 12:47:46 np0005539065 podman[189281]: 2025-11-28 17:47:46.398154015 +0000 UTC m=+0.148915147 container start 075a1f79b5b341dc66cb95f46a3b8ef1787439828f52b00ffd0a54fb4aede071 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 28 12:47:46 np0005539065 podman[189281]: nova_compute
Nov 28 12:47:46 np0005539065 nova_compute[189296]: + sudo -E kolla_set_configs
Nov 28 12:47:46 np0005539065 systemd[1]: Started nova_compute container.
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Validating config file
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Copying service configuration files
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Deleting /etc/ceph
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Creating directory /etc/ceph
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Setting permission for /etc/ceph
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Writing out command to execute
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 28 12:47:46 np0005539065 nova_compute[189296]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 28 12:47:46 np0005539065 nova_compute[189296]: ++ cat /run_command
Nov 28 12:47:46 np0005539065 nova_compute[189296]: + CMD=nova-compute
Nov 28 12:47:46 np0005539065 nova_compute[189296]: + ARGS=
Nov 28 12:47:46 np0005539065 nova_compute[189296]: + sudo kolla_copy_cacerts
Nov 28 12:47:46 np0005539065 nova_compute[189296]: + [[ ! -n '' ]]
Nov 28 12:47:46 np0005539065 nova_compute[189296]: + . kolla_extend_start
Nov 28 12:47:46 np0005539065 nova_compute[189296]: + echo 'Running command: '\''nova-compute'\'''
Nov 28 12:47:46 np0005539065 nova_compute[189296]: Running command: 'nova-compute'
Nov 28 12:47:46 np0005539065 nova_compute[189296]: + umask 0022
Nov 28 12:47:46 np0005539065 nova_compute[189296]: + exec nova-compute
Nov 28 12:47:47 np0005539065 python3.9[189459]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 28 12:47:47 np0005539065 systemd[1]: Started libpod-conmon-967d8e7c2c42bb06c716e4a93e5b8fe00f3b6de97c9a38b1e1bdeed06ab6ea27.scope.
Nov 28 12:47:47 np0005539065 systemd[1]: Started libcrun container.
Nov 28 12:47:47 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1b3e878e72a93974ad47be704810c82d7acc70c3b9368b7b5b569cf81c07782/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 28 12:47:47 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1b3e878e72a93974ad47be704810c82d7acc70c3b9368b7b5b569cf81c07782/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 28 12:47:47 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1b3e878e72a93974ad47be704810c82d7acc70c3b9368b7b5b569cf81c07782/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 28 12:47:47 np0005539065 podman[189486]: 2025-11-28 17:47:47.554357156 +0000 UTC m=+0.141082736 container init 967d8e7c2c42bb06c716e4a93e5b8fe00f3b6de97c9a38b1e1bdeed06ab6ea27 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=nova_compute_init, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 28 12:47:47 np0005539065 podman[189486]: 2025-11-28 17:47:47.563966431 +0000 UTC m=+0.150691981 container start 967d8e7c2c42bb06c716e4a93e5b8fe00f3b6de97c9a38b1e1bdeed06ab6ea27 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=nova_compute_init, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Nov 28 12:47:47 np0005539065 python3.9[189459]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 28 12:47:47 np0005539065 nova_compute_init[189507]: INFO:nova_statedir:Applying nova statedir ownership
Nov 28 12:47:47 np0005539065 nova_compute_init[189507]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 28 12:47:47 np0005539065 nova_compute_init[189507]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 28 12:47:47 np0005539065 nova_compute_init[189507]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 28 12:47:47 np0005539065 nova_compute_init[189507]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 28 12:47:47 np0005539065 nova_compute_init[189507]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 28 12:47:47 np0005539065 nova_compute_init[189507]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 28 12:47:47 np0005539065 nova_compute_init[189507]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 28 12:47:47 np0005539065 nova_compute_init[189507]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 28 12:47:47 np0005539065 nova_compute_init[189507]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 28 12:47:47 np0005539065 nova_compute_init[189507]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 28 12:47:47 np0005539065 nova_compute_init[189507]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 28 12:47:47 np0005539065 nova_compute_init[189507]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 28 12:47:47 np0005539065 nova_compute_init[189507]: INFO:nova_statedir:Nova statedir ownership complete
Nov 28 12:47:47 np0005539065 systemd[1]: libpod-967d8e7c2c42bb06c716e4a93e5b8fe00f3b6de97c9a38b1e1bdeed06ab6ea27.scope: Deactivated successfully.
Nov 28 12:47:47 np0005539065 podman[189519]: 2025-11-28 17:47:47.660259629 +0000 UTC m=+0.024788158 container died 967d8e7c2c42bb06c716e4a93e5b8fe00f3b6de97c9a38b1e1bdeed06ab6ea27 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 28 12:47:47 np0005539065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-967d8e7c2c42bb06c716e4a93e5b8fe00f3b6de97c9a38b1e1bdeed06ab6ea27-userdata-shm.mount: Deactivated successfully.
Nov 28 12:47:47 np0005539065 systemd[1]: var-lib-containers-storage-overlay-a1b3e878e72a93974ad47be704810c82d7acc70c3b9368b7b5b569cf81c07782-merged.mount: Deactivated successfully.
Nov 28 12:47:47 np0005539065 podman[189519]: 2025-11-28 17:47:47.700455843 +0000 UTC m=+0.064984352 container cleanup 967d8e7c2c42bb06c716e4a93e5b8fe00f3b6de97c9a38b1e1bdeed06ab6ea27 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 28 12:47:47 np0005539065 systemd[1]: libpod-conmon-967d8e7c2c42bb06c716e4a93e5b8fe00f3b6de97c9a38b1e1bdeed06ab6ea27.scope: Deactivated successfully.
Nov 28 12:47:48 np0005539065 systemd[1]: session-24.scope: Deactivated successfully.
Nov 28 12:47:48 np0005539065 systemd[1]: session-24.scope: Consumed 1min 51.334s CPU time.
Nov 28 12:47:48 np0005539065 systemd-logind[790]: Session 24 logged out. Waiting for processes to exit.
Nov 28 12:47:48 np0005539065 systemd-logind[790]: Removed session 24.
Nov 28 12:47:48 np0005539065 nova_compute[189296]: 2025-11-28 17:47:48.450 189300 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 28 12:47:48 np0005539065 nova_compute[189296]: 2025-11-28 17:47:48.450 189300 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 28 12:47:48 np0005539065 nova_compute[189296]: 2025-11-28 17:47:48.450 189300 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 28 12:47:48 np0005539065 nova_compute[189296]: 2025-11-28 17:47:48.450 189300 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 28 12:47:48 np0005539065 nova_compute[189296]: 2025-11-28 17:47:48.593 189300 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 12:47:48 np0005539065 nova_compute[189296]: 2025-11-28 17:47:48.615 189300 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 12:47:48 np0005539065 nova_compute[189296]: 2025-11-28 17:47:48.615 189300 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.133 189300 INFO nova.virt.driver [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.226 189300 INFO nova.compute.provider_config [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.241 189300 DEBUG oslo_concurrency.lockutils [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.242 189300 DEBUG oslo_concurrency.lockutils [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.242 189300 DEBUG oslo_concurrency.lockutils [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.242 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.242 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.243 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.243 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.243 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.243 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.243 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.243 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.244 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.244 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.244 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.244 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.244 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.245 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.245 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.245 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.245 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.245 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.246 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.246 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.246 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.246 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.246 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.247 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.247 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.247 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.247 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.248 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.248 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.248 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.248 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.248 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.249 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.249 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.249 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.249 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.249 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.250 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.250 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.250 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.250 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.250 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.251 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.251 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.251 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.251 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.251 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.251 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.251 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.252 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.252 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.252 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.252 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.252 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.253 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.253 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.253 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.253 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.253 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.254 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.254 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.254 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.254 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.254 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.254 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.255 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.255 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.255 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.255 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.255 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.255 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.256 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.256 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.256 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.256 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.257 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.257 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.257 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.257 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.257 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.258 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.258 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.258 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.258 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.258 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.258 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.258 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.259 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.259 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.259 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.259 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.259 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.259 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.259 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.260 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.260 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.260 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.260 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.260 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.260 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.261 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.261 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.261 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.261 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.261 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.261 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.262 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.262 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.262 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.262 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.262 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.262 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.262 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.263 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.263 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.263 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.263 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.263 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.263 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.263 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.263 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.264 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.264 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.264 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.264 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.264 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.264 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.265 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.265 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.265 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.265 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.265 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.265 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.266 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.266 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.266 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.266 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.266 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.266 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.266 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.266 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.267 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.267 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.267 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.267 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.267 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.267 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.268 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.268 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.268 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.268 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.268 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.268 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.269 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.269 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.269 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.269 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.269 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.269 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.269 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.270 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.270 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.270 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.270 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.270 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.270 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.270 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.271 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.271 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.271 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.271 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.271 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.271 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.271 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.272 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.272 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.272 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.272 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.272 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.272 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.272 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.273 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.273 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.273 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.273 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.273 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.273 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.274 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.274 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.274 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.274 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.274 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.274 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.274 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.275 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.275 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.275 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.275 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.275 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.275 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.275 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.276 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.276 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.276 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.276 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.276 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.276 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.276 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.277 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.277 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.277 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.277 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.277 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.277 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.278 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.278 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.278 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.278 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.278 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.278 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.278 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.279 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.279 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.279 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.279 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.279 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.279 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.279 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.280 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.280 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.280 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.280 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.280 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.280 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.281 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.281 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.281 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.281 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.281 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.281 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.282 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.282 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.282 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.282 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.282 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.282 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.282 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.283 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.283 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.283 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.283 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.283 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.283 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.283 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.284 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.284 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.284 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.284 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.284 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.284 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.284 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.285 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.285 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.285 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.285 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.285 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.285 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.286 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.286 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.286 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.286 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.286 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.286 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.287 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.287 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.287 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.287 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.287 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.287 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.288 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.288 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.288 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.288 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.288 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.288 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.289 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.289 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.289 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.289 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.289 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.289 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.289 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.289 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.290 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.290 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.290 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.290 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.290 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.290 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.290 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.291 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.291 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.291 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.291 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.291 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.291 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.292 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.292 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.292 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.292 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.292 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.292 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.293 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.293 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.293 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.293 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.293 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.293 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.293 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.294 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.294 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.294 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.294 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.294 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.294 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.294 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.295 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.295 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.295 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.295 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.295 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.295 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.296 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.296 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.296 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.296 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.296 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.296 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.296 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.297 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.297 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.297 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.297 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.297 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.297 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.297 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.298 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.298 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.298 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.298 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.298 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.298 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.298 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.299 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.299 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.299 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.299 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.299 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.299 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.300 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.300 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.300 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.300 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.300 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.300 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.301 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.301 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.301 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.301 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.301 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.301 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.301 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.302 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.302 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.302 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.302 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.302 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.302 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.302 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.302 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.303 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.303 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.303 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.303 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.303 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.303 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.303 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.304 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.304 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.304 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.304 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.304 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.304 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.304 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.305 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.305 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.305 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.305 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.305 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.305 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.305 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.306 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.306 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.306 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.306 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.306 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.306 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.306 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.306 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.307 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.307 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.307 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.307 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.307 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.307 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.307 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.308 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.308 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.308 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.308 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.308 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.308 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.309 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.309 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.309 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.309 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.309 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.309 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.309 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.310 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.310 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.310 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.310 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.310 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.310 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.310 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.311 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.311 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.311 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.311 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.311 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.311 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.312 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.312 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.312 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.312 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.312 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.312 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.312 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.313 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.313 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.313 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.313 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.313 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.314 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.314 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.314 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.314 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.314 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.314 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.314 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.315 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.315 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.315 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.315 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.315 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.315 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.315 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.315 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.316 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.316 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.316 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.316 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.316 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.316 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.316 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.317 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.317 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.317 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.317 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.317 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.317 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.317 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.318 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.318 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.318 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.318 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.318 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.318 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.318 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.319 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.319 189300 WARNING oslo_config.cfg [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 28 12:47:49 np0005539065 nova_compute[189296]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 28 12:47:49 np0005539065 nova_compute[189296]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 28 12:47:49 np0005539065 nova_compute[189296]: and ``live_migration_inbound_addr`` respectively.
Nov 28 12:47:49 np0005539065 nova_compute[189296]: ).  Its value may be silently ignored in the future.#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.319 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.319 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.319 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.319 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.320 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.320 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.320 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.320 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.320 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.320 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.320 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.321 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.321 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.321 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.321 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.321 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.321 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.322 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.322 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.322 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.322 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.322 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.322 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.322 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.323 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.323 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.323 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.323 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.323 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.323 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.324 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.324 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.324 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.324 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.324 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.324 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.324 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.325 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.325 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.325 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.325 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.325 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.325 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.325 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.326 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.326 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.326 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.326 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.326 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.326 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.326 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.327 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.327 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.327 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.327 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.327 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.327 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.327 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.328 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.328 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.328 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.328 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.328 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.328 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.328 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.328 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.329 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.329 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.329 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.329 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.329 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.329 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.329 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.330 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.330 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.330 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.330 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.330 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.330 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.330 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.331 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.331 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.331 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.331 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.331 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.331 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.331 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.332 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.332 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.332 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.332 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.332 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.332 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.332 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.332 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.333 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.333 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.333 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.333 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.333 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.333 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.333 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.334 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.334 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.334 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.334 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.334 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.334 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.334 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.334 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.335 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.335 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.335 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.335 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.335 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.335 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.335 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.336 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.336 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.336 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.336 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.336 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.336 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.336 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.336 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.337 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.337 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.337 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.337 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.337 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.337 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.338 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.338 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.338 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.338 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.338 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.338 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.338 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.339 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.339 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.339 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.339 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.339 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.339 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.340 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.340 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.340 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.340 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.340 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.340 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.340 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.341 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.341 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.341 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.341 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.341 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.341 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.341 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.342 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.342 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.342 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.342 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.342 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.342 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.342 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.342 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.343 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.343 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.343 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.343 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.343 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.343 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.343 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.343 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.344 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.344 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.344 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.344 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.344 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.344 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.345 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.345 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.345 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.345 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.346 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.346 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.346 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.347 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.347 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.347 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.347 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.347 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.347 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.347 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.348 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.348 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.348 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.348 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.348 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.349 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.349 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.349 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.349 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.349 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.350 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.350 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.350 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.350 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.350 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.351 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.351 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.351 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.351 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.351 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.351 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.351 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.352 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.352 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.352 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.352 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.352 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.352 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.352 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.353 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.353 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.353 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.353 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.353 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.353 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.353 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.354 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.354 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.354 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.354 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.354 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.354 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.355 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.355 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.355 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.355 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.355 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.355 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.355 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.356 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.356 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.356 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.356 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.356 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.356 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.357 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.357 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.357 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.357 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.357 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.358 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.358 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.358 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.358 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.358 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.358 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.358 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.359 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.359 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.359 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.359 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.359 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.359 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.359 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.360 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.360 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.360 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.360 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.360 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.360 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.361 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.361 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.361 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.361 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.361 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.361 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.362 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.362 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.362 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.362 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.362 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.362 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.362 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.362 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.363 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.363 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.363 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.363 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.363 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.363 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.364 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.364 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.365 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.365 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.365 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.365 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.365 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.365 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.366 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.366 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.366 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.366 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.366 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.367 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.367 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.367 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.367 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.367 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.367 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.368 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.368 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.368 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.368 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.368 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.369 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.369 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.369 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.369 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.369 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.369 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.370 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.370 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.370 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.370 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.370 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.370 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.370 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.371 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.371 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.371 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.371 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.371 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.371 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.371 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.372 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.372 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.372 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.372 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.372 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.372 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.372 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.373 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.373 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.373 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.373 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.373 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.373 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.373 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.374 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.374 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.374 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.374 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.374 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.374 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.374 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.374 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.375 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.375 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.375 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.375 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.375 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.375 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.376 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.376 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.376 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.376 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.376 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.376 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.376 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.377 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.377 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.377 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.377 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.377 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.378 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.378 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.378 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.378 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.378 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.378 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.378 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.379 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.379 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.379 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.379 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.379 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.379 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.379 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.380 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.380 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.380 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.380 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.380 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.380 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.380 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.381 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.381 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.381 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.381 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.381 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.381 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.381 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.382 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.382 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.382 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.382 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.382 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.382 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.382 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.382 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.383 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.383 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.383 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.383 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.383 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.383 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.383 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.384 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.384 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.384 189300 DEBUG oslo_service.service [None req-03957a75-b0ef-470f-8072-366169249523 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.385 189300 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.400 189300 DEBUG nova.virt.libvirt.host [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.401 189300 DEBUG nova.virt.libvirt.host [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.401 189300 DEBUG nova.virt.libvirt.host [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.401 189300 DEBUG nova.virt.libvirt.host [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.414 189300 DEBUG nova.virt.libvirt.host [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f9385941130> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.418 189300 DEBUG nova.virt.libvirt.host [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f9385941130> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.419 189300 INFO nova.virt.libvirt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.425 189300 INFO nova.virt.libvirt.host [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Libvirt host capabilities <capabilities>
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <host>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <uuid>23602de7-dd9c-46ae-9cba-a45f7911b9d9</uuid>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <cpu>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <arch>x86_64</arch>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model>EPYC-Rome-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <vendor>AMD</vendor>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <microcode version='16777317'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <signature family='23' model='49' stepping='0'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature name='x2apic'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature name='tsc-deadline'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature name='osxsave'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature name='hypervisor'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature name='tsc_adjust'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature name='spec-ctrl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature name='stibp'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature name='arch-capabilities'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature name='ssbd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature name='cmp_legacy'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature name='topoext'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature name='virt-ssbd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature name='lbrv'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature name='tsc-scale'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature name='vmcb-clean'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature name='pause-filter'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature name='pfthreshold'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature name='svme-addr-chk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature name='rdctl-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature name='skip-l1dfl-vmentry'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature name='mds-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature name='pschange-mc-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <pages unit='KiB' size='4'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <pages unit='KiB' size='2048'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <pages unit='KiB' size='1048576'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </cpu>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <power_management>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <suspend_mem/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <suspend_disk/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <suspend_hybrid/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </power_management>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <iommu support='no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <migration_features>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <live/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <uri_transports>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <uri_transport>tcp</uri_transport>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <uri_transport>rdma</uri_transport>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </uri_transports>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </migration_features>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <topology>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <cells num='1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <cell id='0'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:          <memory unit='KiB'>7864324</memory>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:          <pages unit='KiB' size='4'>1966081</pages>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:          <pages unit='KiB' size='2048'>0</pages>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:          <distances>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:            <sibling id='0' value='10'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:          </distances>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:          <cpus num='8'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:          </cpus>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        </cell>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </cells>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </topology>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <cache>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </cache>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <secmodel>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model>selinux</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <doi>0</doi>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </secmodel>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <secmodel>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model>dac</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <doi>0</doi>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </secmodel>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  </host>
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <guest>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <os_type>hvm</os_type>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <arch name='i686'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <wordsize>32</wordsize>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <domain type='qemu'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <domain type='kvm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </arch>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <features>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <pae/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <nonpae/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <acpi default='on' toggle='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <apic default='on' toggle='no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <cpuselection/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <deviceboot/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <disksnapshot default='on' toggle='no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <externalSnapshot/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </features>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  </guest>
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <guest>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <os_type>hvm</os_type>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <arch name='x86_64'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <wordsize>64</wordsize>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <domain type='qemu'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <domain type='kvm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </arch>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <features>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <acpi default='on' toggle='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <apic default='on' toggle='no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <cpuselection/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <deviceboot/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <disksnapshot default='on' toggle='no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <externalSnapshot/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </features>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  </guest>
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 
Nov 28 12:47:49 np0005539065 nova_compute[189296]: </capabilities>
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.432 189300 DEBUG nova.virt.libvirt.host [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.433 189300 WARNING nova.virt.libvirt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.433 189300 DEBUG nova.virt.libvirt.volume.mount [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.437 189300 DEBUG nova.virt.libvirt.host [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 28 12:47:49 np0005539065 nova_compute[189296]: <domainCapabilities>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <path>/usr/libexec/qemu-kvm</path>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <domain>kvm</domain>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <arch>i686</arch>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <vcpu max='240'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <iothreads supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <os supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <enum name='firmware'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <loader supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='type'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>rom</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>pflash</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='readonly'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>yes</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>no</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='secure'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>no</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </loader>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  </os>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <cpu>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <mode name='host-passthrough' supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='hostPassthroughMigratable'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>on</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>off</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </mode>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <mode name='maximum' supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='maximumMigratable'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>on</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>off</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </mode>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <mode name='host-model' supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <vendor>AMD</vendor>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='x2apic'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='tsc-deadline'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='hypervisor'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='tsc_adjust'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='spec-ctrl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='stibp'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='ssbd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='cmp_legacy'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='overflow-recov'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='succor'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='ibrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='amd-ssbd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='virt-ssbd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='lbrv'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='tsc-scale'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='vmcb-clean'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='flushbyasid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='pause-filter'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='pfthreshold'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='svme-addr-chk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='disable' name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </mode>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <mode name='custom' supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-noTSX'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server-v5'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cooperlake'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cooperlake-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cooperlake-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Denverton'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mpx'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Denverton-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mpx'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Denverton-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Denverton-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Dhyana-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Genoa'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amd-psfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='auto-ibrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='no-nested-data-bp'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='null-sel-clr-base'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='stibp-always-on'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Genoa-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amd-psfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='auto-ibrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='no-nested-data-bp'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='null-sel-clr-base'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='stibp-always-on'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Milan'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Milan-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Milan-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amd-psfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='no-nested-data-bp'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='null-sel-clr-base'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='stibp-always-on'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Rome'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Rome-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Rome-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Rome-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='GraniteRapids'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mcdt-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pbrsb-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='prefetchiti'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='GraniteRapids-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mcdt-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pbrsb-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='prefetchiti'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='GraniteRapids-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx10'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx10-128'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx10-256'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx10-512'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mcdt-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pbrsb-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='prefetchiti'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-noTSX'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-noTSX'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v5'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v6'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v7'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='IvyBridge'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='IvyBridge-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='IvyBridge-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='IvyBridge-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='KnightsMill'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-4fmaps'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-4vnniw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512er'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512pf'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='KnightsMill-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-4fmaps'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-4vnniw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512er'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512pf'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Opteron_G4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fma4'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xop'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Opteron_G4-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fma4'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xop'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Opteron_G5'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fma4'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tbm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xop'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Opteron_G5-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fma4'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tbm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xop'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='SapphireRapids'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='SapphireRapids-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='SapphireRapids-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='SapphireRapids-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='SierraForest'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-ne-convert'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cmpccxadd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mcdt-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pbrsb-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='SierraForest-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-ne-convert'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cmpccxadd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mcdt-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pbrsb-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-v5'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Snowridge'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='core-capability'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mpx'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='split-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Snowridge-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='core-capability'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mpx'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='split-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Snowridge-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='core-capability'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='split-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Snowridge-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='core-capability'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='split-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Snowridge-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='athlon'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnow'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnowext'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='athlon-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnow'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnowext'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='core2duo'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='core2duo-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='coreduo'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='coreduo-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='n270'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='n270-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='phenom'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnow'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnowext'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='phenom-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnow'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnowext'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </mode>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  </cpu>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <memoryBacking supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <enum name='sourceType'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <value>file</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <value>anonymous</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <value>memfd</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  </memoryBacking>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <devices>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <disk supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='diskDevice'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>disk</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>cdrom</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>floppy</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>lun</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='bus'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>ide</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>fdc</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>scsi</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>usb</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>sata</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='model'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio-transitional</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio-non-transitional</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </disk>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <graphics supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='type'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>vnc</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>egl-headless</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>dbus</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </graphics>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <video supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='modelType'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>vga</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>cirrus</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>none</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>bochs</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>ramfb</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </video>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <hostdev supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='mode'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>subsystem</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='startupPolicy'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>default</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>mandatory</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>requisite</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>optional</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='subsysType'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>usb</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>pci</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>scsi</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='capsType'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='pciBackend'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </hostdev>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <rng supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='model'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio-transitional</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio-non-transitional</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='backendModel'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>random</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>egd</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>builtin</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </rng>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <filesystem supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='driverType'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>path</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>handle</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtiofs</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </filesystem>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <tpm supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='model'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>tpm-tis</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>tpm-crb</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='backendModel'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>emulator</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>external</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='backendVersion'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>2.0</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </tpm>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <redirdev supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='bus'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>usb</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </redirdev>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <channel supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='type'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>pty</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>unix</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </channel>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <crypto supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='model'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='type'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>qemu</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='backendModel'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>builtin</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </crypto>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <interface supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='backendType'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>default</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>passt</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </interface>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <panic supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='model'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>isa</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>hyperv</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </panic>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <console supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='type'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>null</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>vc</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>pty</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>dev</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>file</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>pipe</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>stdio</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>udp</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>tcp</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>unix</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>qemu-vdagent</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>dbus</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </console>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  </devices>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <features>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <gic supported='no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <vmcoreinfo supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <genid supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <backingStoreInput supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <backup supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <async-teardown supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <ps2 supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <sev supported='no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <sgx supported='no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <hyperv supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='features'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>relaxed</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>vapic</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>spinlocks</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>vpindex</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>runtime</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>synic</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>stimer</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>reset</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>vendor_id</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>frequencies</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>reenlightenment</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>tlbflush</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>ipi</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>avic</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>emsr_bitmap</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>xmm_input</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <defaults>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <spinlocks>4095</spinlocks>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <stimer_direct>on</stimer_direct>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <tlbflush_direct>on</tlbflush_direct>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <tlbflush_extended>on</tlbflush_extended>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </defaults>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </hyperv>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <launchSecurity supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='sectype'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>tdx</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </launchSecurity>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  </features>
Nov 28 12:47:49 np0005539065 nova_compute[189296]: </domainCapabilities>
Nov 28 12:47:49 np0005539065 nova_compute[189296]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.443 189300 DEBUG nova.virt.libvirt.host [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 28 12:47:49 np0005539065 nova_compute[189296]: <domainCapabilities>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <path>/usr/libexec/qemu-kvm</path>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <domain>kvm</domain>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <arch>i686</arch>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <vcpu max='4096'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <iothreads supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <os supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <enum name='firmware'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <loader supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='type'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>rom</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>pflash</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='readonly'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>yes</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>no</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='secure'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>no</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </loader>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  </os>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <cpu>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <mode name='host-passthrough' supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='hostPassthroughMigratable'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>on</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>off</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </mode>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <mode name='maximum' supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='maximumMigratable'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>on</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>off</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </mode>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <mode name='host-model' supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <vendor>AMD</vendor>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='x2apic'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='tsc-deadline'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='hypervisor'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='tsc_adjust'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='spec-ctrl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='stibp'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='ssbd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='cmp_legacy'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='overflow-recov'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='succor'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='ibrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='amd-ssbd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='virt-ssbd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='lbrv'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='tsc-scale'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='vmcb-clean'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='flushbyasid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='pause-filter'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='pfthreshold'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='svme-addr-chk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='disable' name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </mode>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <mode name='custom' supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-noTSX'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server-v5'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cooperlake'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cooperlake-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cooperlake-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Denverton'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mpx'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Denverton-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mpx'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Denverton-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Denverton-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Dhyana-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Genoa'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amd-psfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='auto-ibrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='no-nested-data-bp'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='null-sel-clr-base'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='stibp-always-on'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Genoa-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amd-psfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='auto-ibrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='no-nested-data-bp'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='null-sel-clr-base'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='stibp-always-on'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Milan'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Milan-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Milan-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amd-psfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='no-nested-data-bp'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='null-sel-clr-base'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='stibp-always-on'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Rome'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Rome-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Rome-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Rome-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='GraniteRapids'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mcdt-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pbrsb-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='prefetchiti'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='GraniteRapids-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mcdt-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pbrsb-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='prefetchiti'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='GraniteRapids-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx10'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx10-128'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx10-256'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx10-512'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mcdt-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pbrsb-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='prefetchiti'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-noTSX'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-noTSX'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v5'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v6'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v7'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='IvyBridge'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='IvyBridge-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='IvyBridge-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='IvyBridge-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='KnightsMill'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-4fmaps'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-4vnniw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512er'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512pf'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='KnightsMill-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-4fmaps'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-4vnniw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512er'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512pf'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Opteron_G4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fma4'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xop'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Opteron_G4-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fma4'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xop'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Opteron_G5'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fma4'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tbm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xop'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Opteron_G5-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fma4'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tbm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xop'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='SapphireRapids'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='SapphireRapids-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='SapphireRapids-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='SapphireRapids-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='SierraForest'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-ne-convert'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cmpccxadd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mcdt-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pbrsb-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='SierraForest-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-ne-convert'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cmpccxadd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mcdt-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pbrsb-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-v5'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Snowridge'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='core-capability'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mpx'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='split-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Snowridge-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='core-capability'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mpx'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='split-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Snowridge-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='core-capability'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='split-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Snowridge-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='core-capability'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='split-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Snowridge-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='athlon'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnow'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnowext'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='athlon-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnow'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnowext'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='core2duo'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='core2duo-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='coreduo'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='coreduo-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='n270'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='n270-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='phenom'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnow'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnowext'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='phenom-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnow'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnowext'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </mode>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  </cpu>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <memoryBacking supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <enum name='sourceType'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <value>file</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <value>anonymous</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <value>memfd</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  </memoryBacking>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <devices>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <disk supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='diskDevice'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>disk</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>cdrom</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>floppy</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>lun</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='bus'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>fdc</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>scsi</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>usb</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>sata</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='model'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio-transitional</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio-non-transitional</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </disk>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <graphics supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='type'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>vnc</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>egl-headless</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>dbus</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </graphics>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <video supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='modelType'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>vga</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>cirrus</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>none</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>bochs</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>ramfb</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </video>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <hostdev supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='mode'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>subsystem</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='startupPolicy'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>default</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>mandatory</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>requisite</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>optional</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='subsysType'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>usb</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>pci</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>scsi</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='capsType'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='pciBackend'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </hostdev>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <rng supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='model'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio-transitional</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio-non-transitional</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='backendModel'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>random</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>egd</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>builtin</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </rng>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <filesystem supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='driverType'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>path</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>handle</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtiofs</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </filesystem>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <tpm supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='model'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>tpm-tis</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>tpm-crb</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='backendModel'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>emulator</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>external</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='backendVersion'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>2.0</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </tpm>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <redirdev supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='bus'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>usb</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </redirdev>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <channel supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='type'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>pty</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>unix</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </channel>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <crypto supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='model'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='type'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>qemu</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='backendModel'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>builtin</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </crypto>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <interface supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='backendType'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>default</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>passt</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </interface>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <panic supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='model'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>isa</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>hyperv</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </panic>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <console supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='type'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>null</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>vc</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>pty</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>dev</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>file</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>pipe</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>stdio</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>udp</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>tcp</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>unix</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>qemu-vdagent</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>dbus</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </console>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  </devices>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <features>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <gic supported='no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <vmcoreinfo supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <genid supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <backingStoreInput supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <backup supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <async-teardown supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <ps2 supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <sev supported='no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <sgx supported='no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <hyperv supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='features'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>relaxed</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>vapic</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>spinlocks</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>vpindex</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>runtime</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>synic</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>stimer</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>reset</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>vendor_id</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>frequencies</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>reenlightenment</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>tlbflush</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>ipi</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>avic</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>emsr_bitmap</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>xmm_input</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <defaults>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <spinlocks>4095</spinlocks>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <stimer_direct>on</stimer_direct>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <tlbflush_direct>on</tlbflush_direct>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <tlbflush_extended>on</tlbflush_extended>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </defaults>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </hyperv>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <launchSecurity supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='sectype'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>tdx</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </launchSecurity>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  </features>
Nov 28 12:47:49 np0005539065 nova_compute[189296]: </domainCapabilities>
Nov 28 12:47:49 np0005539065 nova_compute[189296]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.473 189300 DEBUG nova.virt.libvirt.host [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.476 189300 DEBUG nova.virt.libvirt.host [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 28 12:47:49 np0005539065 nova_compute[189296]: <domainCapabilities>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <path>/usr/libexec/qemu-kvm</path>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <domain>kvm</domain>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <arch>x86_64</arch>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <vcpu max='240'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <iothreads supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <os supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <enum name='firmware'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <loader supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='type'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>rom</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>pflash</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='readonly'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>yes</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>no</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='secure'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>no</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </loader>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  </os>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <cpu>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <mode name='host-passthrough' supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='hostPassthroughMigratable'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>on</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>off</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </mode>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <mode name='maximum' supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='maximumMigratable'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>on</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>off</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </mode>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <mode name='host-model' supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <vendor>AMD</vendor>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='x2apic'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='tsc-deadline'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='hypervisor'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='tsc_adjust'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='spec-ctrl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='stibp'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='ssbd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='cmp_legacy'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='overflow-recov'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='succor'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='ibrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='amd-ssbd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='virt-ssbd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='lbrv'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='tsc-scale'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='vmcb-clean'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='flushbyasid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='pause-filter'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='pfthreshold'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='svme-addr-chk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='disable' name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </mode>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <mode name='custom' supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-noTSX'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server-v5'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cooperlake'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cooperlake-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cooperlake-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Denverton'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mpx'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Denverton-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mpx'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Denverton-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Denverton-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Dhyana-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Genoa'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amd-psfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='auto-ibrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='no-nested-data-bp'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='null-sel-clr-base'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='stibp-always-on'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Genoa-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amd-psfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='auto-ibrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='no-nested-data-bp'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='null-sel-clr-base'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='stibp-always-on'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Milan'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Milan-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Milan-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amd-psfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='no-nested-data-bp'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='null-sel-clr-base'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='stibp-always-on'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Rome'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Rome-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Rome-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Rome-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='GraniteRapids'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mcdt-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pbrsb-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='prefetchiti'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='GraniteRapids-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mcdt-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pbrsb-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='prefetchiti'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='GraniteRapids-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx10'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx10-128'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx10-256'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx10-512'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mcdt-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pbrsb-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='prefetchiti'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-noTSX'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-noTSX'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v5'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v6'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v7'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='IvyBridge'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='IvyBridge-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='IvyBridge-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='IvyBridge-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='KnightsMill'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-4fmaps'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-4vnniw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512er'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512pf'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='KnightsMill-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-4fmaps'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-4vnniw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512er'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512pf'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Opteron_G4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fma4'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xop'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Opteron_G4-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fma4'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xop'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Opteron_G5'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fma4'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tbm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xop'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Opteron_G5-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fma4'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tbm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xop'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='SapphireRapids'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='SapphireRapids-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='SapphireRapids-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='SapphireRapids-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='SierraForest'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-ne-convert'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cmpccxadd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mcdt-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pbrsb-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='SierraForest-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-ne-convert'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cmpccxadd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mcdt-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pbrsb-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-v5'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Snowridge'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='core-capability'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mpx'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='split-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Snowridge-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='core-capability'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mpx'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='split-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Snowridge-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='core-capability'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='split-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Snowridge-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='core-capability'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='split-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Snowridge-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='athlon'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnow'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnowext'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='athlon-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnow'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnowext'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='core2duo'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='core2duo-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='coreduo'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='coreduo-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='n270'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='n270-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='phenom'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnow'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnowext'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='phenom-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnow'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnowext'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </mode>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  </cpu>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <memoryBacking supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <enum name='sourceType'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <value>file</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <value>anonymous</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <value>memfd</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  </memoryBacking>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <devices>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <disk supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='diskDevice'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>disk</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>cdrom</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>floppy</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>lun</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='bus'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>ide</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>fdc</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>scsi</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>usb</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>sata</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='model'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio-transitional</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio-non-transitional</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </disk>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <graphics supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='type'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>vnc</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>egl-headless</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>dbus</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </graphics>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <video supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='modelType'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>vga</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>cirrus</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>none</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>bochs</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>ramfb</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </video>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <hostdev supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='mode'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>subsystem</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='startupPolicy'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>default</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>mandatory</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>requisite</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>optional</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='subsysType'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>usb</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>pci</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>scsi</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='capsType'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='pciBackend'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </hostdev>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <rng supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='model'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio-transitional</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio-non-transitional</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='backendModel'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>random</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>egd</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>builtin</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </rng>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <filesystem supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='driverType'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>path</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>handle</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtiofs</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </filesystem>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <tpm supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='model'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>tpm-tis</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>tpm-crb</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='backendModel'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>emulator</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>external</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='backendVersion'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>2.0</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </tpm>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <redirdev supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='bus'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>usb</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </redirdev>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <channel supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='type'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>pty</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>unix</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </channel>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <crypto supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='model'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='type'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>qemu</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='backendModel'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>builtin</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </crypto>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <interface supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='backendType'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>default</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>passt</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </interface>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <panic supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='model'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>isa</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>hyperv</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </panic>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <console supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='type'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>null</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>vc</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>pty</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>dev</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>file</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>pipe</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>stdio</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>udp</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>tcp</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>unix</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>qemu-vdagent</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>dbus</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </console>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  </devices>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <features>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <gic supported='no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <vmcoreinfo supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <genid supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <backingStoreInput supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <backup supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <async-teardown supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <ps2 supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <sev supported='no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <sgx supported='no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <hyperv supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='features'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>relaxed</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>vapic</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>spinlocks</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>vpindex</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>runtime</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>synic</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>stimer</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>reset</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>vendor_id</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>frequencies</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>reenlightenment</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>tlbflush</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>ipi</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>avic</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>emsr_bitmap</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>xmm_input</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <defaults>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <spinlocks>4095</spinlocks>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <stimer_direct>on</stimer_direct>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <tlbflush_direct>on</tlbflush_direct>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <tlbflush_extended>on</tlbflush_extended>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </defaults>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </hyperv>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <launchSecurity supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='sectype'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>tdx</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </launchSecurity>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  </features>
Nov 28 12:47:49 np0005539065 nova_compute[189296]: </domainCapabilities>
Nov 28 12:47:49 np0005539065 nova_compute[189296]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.542 189300 DEBUG nova.virt.libvirt.host [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 28 12:47:49 np0005539065 nova_compute[189296]: <domainCapabilities>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <path>/usr/libexec/qemu-kvm</path>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <domain>kvm</domain>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <arch>x86_64</arch>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <vcpu max='4096'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <iothreads supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <os supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <enum name='firmware'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <value>efi</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <loader supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='type'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>rom</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>pflash</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='readonly'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>yes</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>no</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='secure'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>yes</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>no</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </loader>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  </os>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <cpu>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <mode name='host-passthrough' supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='hostPassthroughMigratable'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>on</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>off</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </mode>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <mode name='maximum' supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='maximumMigratable'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>on</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>off</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </mode>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <mode name='host-model' supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <vendor>AMD</vendor>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='x2apic'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='tsc-deadline'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='hypervisor'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='tsc_adjust'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='spec-ctrl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='stibp'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='ssbd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='cmp_legacy'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='overflow-recov'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='succor'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='ibrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='amd-ssbd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='virt-ssbd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='lbrv'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='tsc-scale'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='vmcb-clean'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='flushbyasid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='pause-filter'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='pfthreshold'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='svme-addr-chk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <feature policy='disable' name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </mode>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <mode name='custom' supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-noTSX'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Broadwell-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cascadelake-Server-v5'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cooperlake'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cooperlake-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Cooperlake-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Denverton'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mpx'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Denverton-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mpx'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Denverton-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Denverton-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Dhyana-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Genoa'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amd-psfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='auto-ibrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='no-nested-data-bp'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='null-sel-clr-base'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='stibp-always-on'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Genoa-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amd-psfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='auto-ibrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='no-nested-data-bp'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='null-sel-clr-base'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='stibp-always-on'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Milan'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Milan-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Milan-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amd-psfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='no-nested-data-bp'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='null-sel-clr-base'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='stibp-always-on'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Rome'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Rome-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Rome-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-Rome-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='EPYC-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='GraniteRapids'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mcdt-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pbrsb-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='prefetchiti'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='GraniteRapids-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mcdt-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pbrsb-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='prefetchiti'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='GraniteRapids-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx10'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx10-128'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx10-256'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx10-512'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mcdt-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pbrsb-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='prefetchiti'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-noTSX'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Haswell-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-noTSX'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v5'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v6'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Icelake-Server-v7'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='IvyBridge'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='IvyBridge-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='IvyBridge-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='IvyBridge-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='KnightsMill'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-4fmaps'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-4vnniw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512er'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512pf'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='KnightsMill-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-4fmaps'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-4vnniw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512er'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512pf'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Opteron_G4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fma4'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xop'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Opteron_G4-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fma4'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xop'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Opteron_G5'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fma4'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tbm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xop'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Opteron_G5-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fma4'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tbm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xop'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='SapphireRapids'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='SapphireRapids-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='SapphireRapids-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='SapphireRapids-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='amx-tile'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-bf16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-fp16'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512-vpopcntdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bitalg'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vbmi2'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrc'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fzrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='la57'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='taa-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='tsx-ldtrk'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xfd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='SierraForest'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-ne-convert'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cmpccxadd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mcdt-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pbrsb-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='SierraForest-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-ifma'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-ne-convert'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx-vnni-int8'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='bus-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cmpccxadd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fbsdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='fsrs'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ibrs-all'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mcdt-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pbrsb-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='psdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='sbdr-ssdp-no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='serialize'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vaes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='vpclmulqdq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Client-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='hle'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='rtm'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Skylake-Server-v5'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512bw'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512cd'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512dq'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512f'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='avx512vl'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='invpcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pcid'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='pku'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Snowridge'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='core-capability'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mpx'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='split-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Snowridge-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='core-capability'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='mpx'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='split-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Snowridge-v2'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='core-capability'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='split-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Snowridge-v3'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='core-capability'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='split-lock-detect'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='Snowridge-v4'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='cldemote'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='erms'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='gfni'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdir64b'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='movdiri'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='xsaves'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='athlon'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnow'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnowext'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='athlon-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnow'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnowext'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='core2duo'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='core2duo-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='coreduo'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='coreduo-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='n270'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='n270-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='ss'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='phenom'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnow'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnowext'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <blockers model='phenom-v1'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnow'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <feature name='3dnowext'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </blockers>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </mode>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  </cpu>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <memoryBacking supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <enum name='sourceType'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <value>file</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <value>anonymous</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <value>memfd</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  </memoryBacking>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <devices>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <disk supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='diskDevice'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>disk</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>cdrom</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>floppy</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>lun</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='bus'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>fdc</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>scsi</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>usb</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>sata</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='model'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio-transitional</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio-non-transitional</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </disk>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <graphics supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='type'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>vnc</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>egl-headless</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>dbus</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </graphics>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <video supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='modelType'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>vga</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>cirrus</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>none</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>bochs</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>ramfb</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </video>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <hostdev supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='mode'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>subsystem</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='startupPolicy'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>default</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>mandatory</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>requisite</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>optional</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='subsysType'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>usb</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>pci</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>scsi</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='capsType'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='pciBackend'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </hostdev>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <rng supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='model'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio-transitional</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtio-non-transitional</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='backendModel'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>random</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>egd</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>builtin</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </rng>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <filesystem supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='driverType'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>path</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>handle</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>virtiofs</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </filesystem>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <tpm supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='model'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>tpm-tis</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>tpm-crb</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='backendModel'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>emulator</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>external</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='backendVersion'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>2.0</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </tpm>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <redirdev supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='bus'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>usb</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </redirdev>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <channel supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='type'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>pty</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>unix</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </channel>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <crypto supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='model'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='type'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>qemu</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='backendModel'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>builtin</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </crypto>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <interface supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='backendType'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>default</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>passt</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </interface>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <panic supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='model'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>isa</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>hyperv</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </panic>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <console supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='type'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>null</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>vc</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>pty</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>dev</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>file</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>pipe</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>stdio</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>udp</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>tcp</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>unix</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>qemu-vdagent</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>dbus</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </console>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  </devices>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  <features>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <gic supported='no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <vmcoreinfo supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <genid supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <backingStoreInput supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <backup supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <async-teardown supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <ps2 supported='yes'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <sev supported='no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <sgx supported='no'/>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <hyperv supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='features'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>relaxed</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>vapic</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>spinlocks</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>vpindex</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>runtime</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>synic</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>stimer</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>reset</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>vendor_id</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>frequencies</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>reenlightenment</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>tlbflush</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>ipi</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>avic</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>emsr_bitmap</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>xmm_input</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <defaults>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <spinlocks>4095</spinlocks>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <stimer_direct>on</stimer_direct>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <tlbflush_direct>on</tlbflush_direct>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <tlbflush_extended>on</tlbflush_extended>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </defaults>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </hyperv>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    <launchSecurity supported='yes'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      <enum name='sectype'>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:        <value>tdx</value>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:      </enum>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:    </launchSecurity>
Nov 28 12:47:49 np0005539065 nova_compute[189296]:  </features>
Nov 28 12:47:49 np0005539065 nova_compute[189296]: </domainCapabilities>
Nov 28 12:47:49 np0005539065 nova_compute[189296]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.608 189300 DEBUG nova.virt.libvirt.host [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.609 189300 DEBUG nova.virt.libvirt.host [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.609 189300 DEBUG nova.virt.libvirt.host [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.609 189300 INFO nova.virt.libvirt.host [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Secure Boot support detected#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.612 189300 INFO nova.virt.libvirt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.612 189300 INFO nova.virt.libvirt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.621 189300 DEBUG nova.virt.libvirt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.664 189300 INFO nova.virt.node [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Determined node identity d10a9930-4504-4222-97f7-6727a5a2d43b from /var/lib/nova/compute_id#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.685 189300 WARNING nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Compute nodes ['d10a9930-4504-4222-97f7-6727a5a2d43b'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.727 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.767 189300 WARNING nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.767 189300 DEBUG oslo_concurrency.lockutils [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.767 189300 DEBUG oslo_concurrency.lockutils [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.767 189300 DEBUG oslo_concurrency.lockutils [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 12:47:49 np0005539065 nova_compute[189296]: 2025-11-28 17:47:49.768 189300 DEBUG nova.compute.resource_tracker [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 12:47:49 np0005539065 systemd[1]: Starting libvirt nodedev daemon...
Nov 28 12:47:49 np0005539065 systemd[1]: Started libvirt nodedev daemon.
Nov 28 12:47:50 np0005539065 nova_compute[189296]: 2025-11-28 17:47:50.023 189300 WARNING nova.virt.libvirt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 12:47:50 np0005539065 nova_compute[189296]: 2025-11-28 17:47:50.024 189300 DEBUG nova.compute.resource_tracker [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6058MB free_disk=72.61060333251953GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 12:47:50 np0005539065 nova_compute[189296]: 2025-11-28 17:47:50.024 189300 DEBUG oslo_concurrency.lockutils [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 12:47:50 np0005539065 nova_compute[189296]: 2025-11-28 17:47:50.025 189300 DEBUG oslo_concurrency.lockutils [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 12:47:50 np0005539065 nova_compute[189296]: 2025-11-28 17:47:50.040 189300 WARNING nova.compute.resource_tracker [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] No compute node record for compute-0.ctlplane.example.com:d10a9930-4504-4222-97f7-6727a5a2d43b: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host d10a9930-4504-4222-97f7-6727a5a2d43b could not be found.#033[00m
Nov 28 12:47:50 np0005539065 nova_compute[189296]: 2025-11-28 17:47:50.063 189300 INFO nova.compute.resource_tracker [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: d10a9930-4504-4222-97f7-6727a5a2d43b#033[00m
Nov 28 12:47:50 np0005539065 nova_compute[189296]: 2025-11-28 17:47:50.132 189300 DEBUG nova.compute.resource_tracker [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 12:47:50 np0005539065 nova_compute[189296]: 2025-11-28 17:47:50.133 189300 DEBUG nova.compute.resource_tracker [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 12:47:51 np0005539065 podman[189619]: 2025-11-28 17:47:51.03287997 +0000 UTC m=+0.085438323 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 12:47:51 np0005539065 nova_compute[189296]: 2025-11-28 17:47:51.123 189300 INFO nova.scheduler.client.report [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [req-a3f74a2d-54c6-4051-a828-55db2f331fdf] Created resource provider record via placement API for resource provider with UUID d10a9930-4504-4222-97f7-6727a5a2d43b and name compute-0.ctlplane.example.com.#033[00m
Nov 28 12:47:51 np0005539065 nova_compute[189296]: 2025-11-28 17:47:51.679 189300 DEBUG nova.virt.libvirt.host [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 28 12:47:51 np0005539065 nova_compute[189296]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Nov 28 12:47:51 np0005539065 nova_compute[189296]: 2025-11-28 17:47:51.679 189300 INFO nova.virt.libvirt.host [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] kernel doesn't support AMD SEV#033[00m
Nov 28 12:47:51 np0005539065 nova_compute[189296]: 2025-11-28 17:47:51.680 189300 DEBUG nova.compute.provider_tree [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Updating inventory in ProviderTree for provider d10a9930-4504-4222-97f7-6727a5a2d43b with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 28 12:47:51 np0005539065 nova_compute[189296]: 2025-11-28 17:47:51.680 189300 DEBUG nova.virt.libvirt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 28 12:47:51 np0005539065 nova_compute[189296]: 2025-11-28 17:47:51.748 189300 DEBUG nova.scheduler.client.report [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Updated inventory for provider d10a9930-4504-4222-97f7-6727a5a2d43b with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 28 12:47:51 np0005539065 nova_compute[189296]: 2025-11-28 17:47:51.748 189300 DEBUG nova.compute.provider_tree [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Updating resource provider d10a9930-4504-4222-97f7-6727a5a2d43b generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 28 12:47:51 np0005539065 nova_compute[189296]: 2025-11-28 17:47:51.749 189300 DEBUG nova.compute.provider_tree [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Updating inventory in ProviderTree for provider d10a9930-4504-4222-97f7-6727a5a2d43b with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 28 12:47:51 np0005539065 nova_compute[189296]: 2025-11-28 17:47:51.865 189300 DEBUG nova.compute.provider_tree [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Updating resource provider d10a9930-4504-4222-97f7-6727a5a2d43b generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 28 12:47:51 np0005539065 nova_compute[189296]: 2025-11-28 17:47:51.905 189300 DEBUG nova.compute.resource_tracker [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 12:47:51 np0005539065 nova_compute[189296]: 2025-11-28 17:47:51.905 189300 DEBUG oslo_concurrency.lockutils [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 12:47:51 np0005539065 nova_compute[189296]: 2025-11-28 17:47:51.905 189300 DEBUG nova.service [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Nov 28 12:47:51 np0005539065 nova_compute[189296]: 2025-11-28 17:47:51.971 189300 DEBUG nova.service [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Nov 28 12:47:51 np0005539065 nova_compute[189296]: 2025-11-28 17:47:51.971 189300 DEBUG nova.servicegroup.drivers.db [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Nov 28 12:47:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:47:52.585 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 12:47:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:47:52.585 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 12:47:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:47:52.585 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 12:47:52 np0005539065 nova_compute[189296]: 2025-11-28 17:47:52.972 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:47:52 np0005539065 nova_compute[189296]: 2025-11-28 17:47:52.992 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:47:54 np0005539065 systemd-logind[790]: New session 26 of user zuul.
Nov 28 12:47:54 np0005539065 systemd[1]: Started Session 26 of User zuul.
Nov 28 12:47:55 np0005539065 podman[189766]: 2025-11-28 17:47:55.395970755 +0000 UTC m=+0.055393467 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 28 12:47:55 np0005539065 python3.9[189803]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:47:57 np0005539065 python3.9[189965]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 28 12:47:57 np0005539065 systemd[1]: Reloading.
Nov 28 12:47:57 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:47:57 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:47:58 np0005539065 python3.9[190149]: ansible-ansible.builtin.service_facts Invoked
Nov 28 12:47:58 np0005539065 network[190166]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 28 12:47:58 np0005539065 network[190167]: 'network-scripts' will be removed from distribution in near future.
Nov 28 12:47:58 np0005539065 network[190168]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 28 12:48:02 np0005539065 podman[190414]: 2025-11-28 17:48:02.301262916 +0000 UTC m=+0.100647254 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 12:48:02 np0005539065 python3.9[190456]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:48:03 np0005539065 python3.9[190618]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:48:03 np0005539065 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 28 12:48:04 np0005539065 python3.9[190771]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:48:05 np0005539065 python3.9[190923]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:48:05 np0005539065 python3.9[191075]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 28 12:48:06 np0005539065 python3.9[191227]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 28 12:48:06 np0005539065 systemd[1]: Reloading.
Nov 28 12:48:06 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:48:06 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:48:07 np0005539065 python3.9[191414]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:48:08 np0005539065 python3.9[191567]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:48:09 np0005539065 python3.9[191717]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:48:09 np0005539065 python3.9[191869]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:48:10 np0005539065 python3.9[191990]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764352089.296714-133-49094507162749/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:48:11 np0005539065 python3.9[192142]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Nov 28 12:48:12 np0005539065 python3.9[192294]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Nov 28 12:48:13 np0005539065 python3.9[192447]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 28 12:48:14 np0005539065 python3.9[192605]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 28 12:48:15 np0005539065 python3.9[192763]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:48:15 np0005539065 python3.9[192884]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764352094.8656466-201-25558167949154/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:48:16 np0005539065 python3.9[193034]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:48:16 np0005539065 python3.9[193155]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764352095.9241502-201-200954208123872/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:48:17 np0005539065 python3.9[193305]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:48:17 np0005539065 python3.9[193426]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764352097.1328328-201-145033635719456/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:48:18 np0005539065 python3.9[193576]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:48:19 np0005539065 python3.9[193728]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:48:19 np0005539065 python3.9[193880]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:48:20 np0005539065 python3.9[194001]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764352099.346526-260-185296522460394/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:48:20 np0005539065 python3.9[194151]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:48:21 np0005539065 podman[194201]: 2025-11-28 17:48:21.280947871 +0000 UTC m=+0.079147410 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 28 12:48:21 np0005539065 python3.9[194239]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:48:22 np0005539065 python3.9[194397]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:48:22 np0005539065 python3.9[194518]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764352101.6047983-260-12154106637732/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=4096a0f5410f47dcaf8ab19e56a9d8e211effecd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:48:23 np0005539065 python3.9[194668]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:48:23 np0005539065 python3.9[194789]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764352102.8463662-260-216527132387026/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:48:24 np0005539065 python3.9[194939]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:48:24 np0005539065 python3.9[195060]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764352104.039044-260-131673268745427/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:48:25 np0005539065 python3.9[195210]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:48:25 np0005539065 podman[195305]: 2025-11-28 17:48:25.827008447 +0000 UTC m=+0.046514455 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 12:48:25 np0005539065 python3.9[195344]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764352105.0591383-260-38308182386962/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:48:26 np0005539065 python3.9[195500]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:48:27 np0005539065 python3.9[195621]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764352106.091189-260-270408321702863/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:48:27 np0005539065 python3.9[195771]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:48:28 np0005539065 python3.9[195892]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764352107.1510937-260-277145743943673/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:48:29 np0005539065 python3.9[196042]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:48:29 np0005539065 python3.9[196163]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764352108.6062098-260-19991347930203/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:48:30 np0005539065 python3.9[196313]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:48:30 np0005539065 python3.9[196434]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764352109.7426012-260-276189854705250/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:48:31 np0005539065 python3.9[196584]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:48:31 np0005539065 python3.9[196705]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764352110.7862246-260-8763126018777/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:48:32 np0005539065 python3.9[196855]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:48:32 np0005539065 podman[196856]: 2025-11-28 17:48:32.56509786 +0000 UTC m=+0.122971191 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 28 12:48:32 np0005539065 python3.9[196957]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:48:33 np0005539065 python3.9[197107]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:48:34 np0005539065 python3.9[197183]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:48:34 np0005539065 python3.9[197333]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:48:35 np0005539065 python3.9[197409]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:48:35 np0005539065 python3.9[197561]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:48:36 np0005539065 python3.9[197713]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:48:37 np0005539065 python3.9[197865]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:48:38 np0005539065 python3.9[198017]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:48:38 np0005539065 systemd[1]: Reloading.
Nov 28 12:48:38 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:48:38 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:48:38 np0005539065 systemd[1]: Listening on Podman API Socket.
Nov 28 12:48:39 np0005539065 python3.9[198207]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:48:39 np0005539065 python3.9[198330]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764352118.8442886-482-90931769268341/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:48:40 np0005539065 python3.9[198406]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:48:40 np0005539065 python3.9[198529]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764352118.8442886-482-90931769268341/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:48:41 np0005539065 python3.9[198681]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Nov 28 12:48:42 np0005539065 python3.9[198833]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 28 12:48:43 np0005539065 python3[198985]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 28 12:48:44 np0005539065 podman[199023]: 2025-11-28 17:48:44.119194394 +0000 UTC m=+0.051215740 container create 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=f26160204c78771e78cdd2489258319b, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 28 12:48:44 np0005539065 podman[199023]: 2025-11-28 17:48:44.094906772 +0000 UTC m=+0.026928138 image pull e473677aab0cdc2c7c03a6e756cd02c6bfc4f008b09c67064c39f2682bdecd39 quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Nov 28 12:48:44 np0005539065 python3[198985]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Nov 28 12:48:45 np0005539065 python3.9[199214]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:48:45 np0005539065 python3.9[199368]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:48:46 np0005539065 python3.9[199519]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764352125.7770767-546-239014849833275/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:48:47 np0005539065 python3.9[199595]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 28 12:48:47 np0005539065 systemd[1]: Reloading.
Nov 28 12:48:47 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:48:47 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:48:48 np0005539065 python3.9[199705]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:48:48 np0005539065 systemd[1]: Reloading.
Nov 28 12:48:48 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:48:48 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.627 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.627 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.627 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.642 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.643 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.644 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.644 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.645 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.645 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.646 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.646 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.647 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.674 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.676 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.677 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.677 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 12:48:48 np0005539065 systemd[1]: Starting ceilometer_agent_compute container...
Nov 28 12:48:48 np0005539065 systemd[1]: Started libcrun container.
Nov 28 12:48:48 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/447c5c556de25e598cd131f3cd03c30216b5b5afaf2dafc2ca635174f67438e4/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 28 12:48:48 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/447c5c556de25e598cd131f3cd03c30216b5b5afaf2dafc2ca635174f67438e4/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 28 12:48:48 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/447c5c556de25e598cd131f3cd03c30216b5b5afaf2dafc2ca635174f67438e4/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 28 12:48:48 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/447c5c556de25e598cd131f3cd03c30216b5b5afaf2dafc2ca635174f67438e4/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.845 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.846 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6033MB free_disk=72.61135864257812GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.846 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.846 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 12:48:48 np0005539065 systemd[1]: Started /usr/bin/podman healthcheck run 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066.
Nov 28 12:48:48 np0005539065 podman[199745]: 2025-11-28 17:48:48.86734673 +0000 UTC m=+0.127637575 container init 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 28 12:48:48 np0005539065 ceilometer_agent_compute[199760]: + sudo -E kolla_set_configs
Nov 28 12:48:48 np0005539065 podman[199745]: 2025-11-28 17:48:48.890875263 +0000 UTC m=+0.151166088 container start 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
maintainer=OpenStack Kubernetes Operator team)
Nov 28 12:48:48 np0005539065 podman[199745]: ceilometer_agent_compute
Nov 28 12:48:48 np0005539065 ceilometer_agent_compute[199760]: sudo: unable to send audit message: Operation not permitted
Nov 28 12:48:48 np0005539065 systemd[1]: Started ceilometer_agent_compute container.
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.923 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.924 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.952 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 12:48:48 np0005539065 podman[199766]: 2025-11-28 17:48:48.978005159 +0000 UTC m=+0.077070181 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=f26160204c78771e78cdd2489258319b, container_name=ceilometer_agent_compute, 
io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.977 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.980 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 12:48:48 np0005539065 nova_compute[189296]: 2025-11-28 17:48:48.980 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 12:48:48 np0005539065 ceilometer_agent_compute[199760]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 28 12:48:48 np0005539065 ceilometer_agent_compute[199760]: INFO:__main__:Validating config file
Nov 28 12:48:48 np0005539065 ceilometer_agent_compute[199760]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 28 12:48:48 np0005539065 ceilometer_agent_compute[199760]: INFO:__main__:Copying service configuration files
Nov 28 12:48:48 np0005539065 ceilometer_agent_compute[199760]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 28 12:48:48 np0005539065 ceilometer_agent_compute[199760]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 28 12:48:48 np0005539065 systemd[1]: 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066-17f8452037a60959.service: Main process exited, code=exited, status=1/FAILURE
Nov 28 12:48:48 np0005539065 ceilometer_agent_compute[199760]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 28 12:48:48 np0005539065 ceilometer_agent_compute[199760]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 28 12:48:48 np0005539065 systemd[1]: 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066-17f8452037a60959.service: Failed with result 'exit-code'.
Nov 28 12:48:48 np0005539065 ceilometer_agent_compute[199760]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 28 12:48:48 np0005539065 ceilometer_agent_compute[199760]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 28 12:48:48 np0005539065 ceilometer_agent_compute[199760]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 28 12:48:48 np0005539065 ceilometer_agent_compute[199760]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 28 12:48:48 np0005539065 ceilometer_agent_compute[199760]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 28 12:48:48 np0005539065 ceilometer_agent_compute[199760]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 28 12:48:48 np0005539065 ceilometer_agent_compute[199760]: INFO:__main__:Writing out command to execute
Nov 28 12:48:48 np0005539065 ceilometer_agent_compute[199760]: ++ cat /run_command
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: + ARGS=
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: + sudo kolla_copy_cacerts
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: sudo: unable to send audit message: Operation not permitted
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: + [[ ! -n '' ]]
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: + . kolla_extend_start
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: + umask 0022
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Nov 28 12:48:49 np0005539065 python3.9[199943]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.903 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.904 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.904 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.904 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.904 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.904 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.904 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.904 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.905 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.905 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.905 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.905 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.905 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.905 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.905 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.905 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.905 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.905 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.905 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.905 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.906 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.906 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.906 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.906 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.906 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.906 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.906 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.906 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.906 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.906 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 systemd[1]: Stopping ceilometer_agent_compute container...
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.907 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.907 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.907 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.907 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.907 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.907 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.907 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.907 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.907 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.907 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.907 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.907 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.907 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.907 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.908 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.908 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.908 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.908 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.908 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.908 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.908 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.908 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.908 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.908 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.908 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.908 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.908 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.909 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.909 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.909 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.909 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.909 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.909 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.909 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.909 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.909 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.909 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.909 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.909 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.909 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.909 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.910 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.910 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.910 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.910 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.910 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.910 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.910 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.910 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.910 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.910 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.910 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.910 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.911 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.911 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.911 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.911 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.911 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.911 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.911 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.911 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.911 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.911 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.911 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.911 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.912 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.912 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.912 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.912 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.912 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.912 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.912 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.912 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.912 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.912 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.912 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.913 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.913 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.913 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.913 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.913 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.913 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.913 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.913 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.913 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.913 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.913 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.913 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.913 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.914 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.914 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.914 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.914 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.914 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.914 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.914 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.914 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.914 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.914 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.914 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.914 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.914 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.915 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.915 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.915 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.915 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.915 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.915 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.915 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.915 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.915 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.915 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.915 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.915 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.915 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.916 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.916 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.916 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.916 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.916 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.916 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.916 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.916 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.937 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.937 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.937 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.937 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.937 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.938 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.938 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.938 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.938 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.938 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.938 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.938 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.938 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.938 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.938 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.938 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.938 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.938 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.939 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.939 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.939 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.939 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.939 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.939 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.939 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.939 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.939 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.939 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.939 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.939 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.939 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.939 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.939 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.939 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.939 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.940 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.940 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.940 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.940 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.940 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.940 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.940 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.940 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.940 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.940 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.940 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.940 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.940 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.940 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.940 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.941 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.941 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.941 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.941 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.941 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.941 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.941 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.941 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.941 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.941 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.941 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.941 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.941 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.941 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.942 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.942 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.942 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.942 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.942 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.942 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.942 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.942 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.942 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.942 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.942 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.942 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.942 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.943 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.943 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.943 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.943 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.943 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.943 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.943 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.943 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.943 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.943 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.944 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.944 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.944 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.944 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.944 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.944 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.944 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.944 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.944 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.944 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.944 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.944 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.944 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.945 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.945 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.945 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.945 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.945 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.945 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.945 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.945 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.945 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.945 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.945 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.945 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.945 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.945 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.945 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.945 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.946 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.946 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.946 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.946 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.946 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.946 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.946 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.946 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.946 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.946 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.946 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.946 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.946 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.946 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.946 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.947 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.947 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.947 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.947 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.947 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.947 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.947 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.947 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.947 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.947 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.947 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.947 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.947 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.947 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.947 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.947 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.948 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.948 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.948 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.948 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.948 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.948 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.948 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.950 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.952 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.953 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Nov 28 12:48:49 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:49.973 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.075 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.075 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.075 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.135 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.143 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.143 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.144 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.256 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.256 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.256 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.256 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.256 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.256 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.256 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.257 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.257 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.257 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.257 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.257 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.257 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.257 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.257 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.257 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.257 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.258 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.258 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.258 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.258 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.258 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.258 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.258 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.258 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.258 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.258 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.258 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.259 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.259 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.259 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.259 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.259 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.259 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.259 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.259 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.259 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.259 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.259 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.259 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.259 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.260 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.260 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.260 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.260 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.260 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.260 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.260 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.260 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.260 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.260 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.260 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.260 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.261 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.261 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.261 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.261 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.261 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.261 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.261 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.261 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.261 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.261 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.261 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.261 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.262 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.262 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.262 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.262 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.262 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.262 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.262 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.262 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.262 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.262 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.262 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.262 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.262 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.263 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.263 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.263 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.263 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.263 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.263 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.263 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.263 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.263 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.263 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.263 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.264 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.264 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.264 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.264 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.264 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.264 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.264 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.264 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.264 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.264 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.264 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.264 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.264 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.265 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.265 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.265 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.265 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.265 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.265 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.265 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.265 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.265 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.265 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.265 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.265 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.265 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.267 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.267 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.267 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.267 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.267 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.267 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.267 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.267 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.267 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.267 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.267 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.267 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.267 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.267 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.267 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.268 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.268 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.268 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.268 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.268 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.269 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.269 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.269 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.269 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.269 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.269 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.269 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.269 14 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [14]
Nov 28 12:48:50 np0005539065 virtqemud[189019]: End of file while reading data: Input/output error
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[199760]: 2025-11-28 17:48:50.278 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
Nov 28 12:48:50 np0005539065 systemd[1]: libpod-210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066.scope: Deactivated successfully.
Nov 28 12:48:50 np0005539065 systemd[1]: libpod-210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066.scope: Consumed 1.758s CPU time.
Nov 28 12:48:50 np0005539065 podman[199947]: 2025-11-28 17:48:50.626788015 +0000 UTC m=+0.701289837 container died 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 28 12:48:50 np0005539065 systemd[1]: 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066-17f8452037a60959.timer: Deactivated successfully.
Nov 28 12:48:50 np0005539065 systemd[1]: Stopped /usr/bin/podman healthcheck run 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066.
Nov 28 12:48:50 np0005539065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066-userdata-shm.mount: Deactivated successfully.
Nov 28 12:48:50 np0005539065 systemd[1]: var-lib-containers-storage-overlay-447c5c556de25e598cd131f3cd03c30216b5b5afaf2dafc2ca635174f67438e4-merged.mount: Deactivated successfully.
Nov 28 12:48:50 np0005539065 podman[199947]: 2025-11-28 17:48:50.676437246 +0000 UTC m=+0.750939068 container cleanup 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 28 12:48:50 np0005539065 podman[199947]: ceilometer_agent_compute
Nov 28 12:48:50 np0005539065 podman[199990]: ceilometer_agent_compute
Nov 28 12:48:50 np0005539065 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Nov 28 12:48:50 np0005539065 systemd[1]: Stopped ceilometer_agent_compute container.
Nov 28 12:48:50 np0005539065 systemd[1]: Starting ceilometer_agent_compute container...
Nov 28 12:48:50 np0005539065 systemd[1]: Started libcrun container.
Nov 28 12:48:50 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/447c5c556de25e598cd131f3cd03c30216b5b5afaf2dafc2ca635174f67438e4/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 28 12:48:50 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/447c5c556de25e598cd131f3cd03c30216b5b5afaf2dafc2ca635174f67438e4/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 28 12:48:50 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/447c5c556de25e598cd131f3cd03c30216b5b5afaf2dafc2ca635174f67438e4/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 28 12:48:50 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/447c5c556de25e598cd131f3cd03c30216b5b5afaf2dafc2ca635174f67438e4/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 28 12:48:50 np0005539065 systemd[1]: Started /usr/bin/podman healthcheck run 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066.
Nov 28 12:48:50 np0005539065 podman[200004]: 2025-11-28 17:48:50.865334664 +0000 UTC m=+0.098261609 container init 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=f26160204c78771e78cdd2489258319b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: + sudo -E kolla_set_configs
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: sudo: unable to send audit message: Operation not permitted
Nov 28 12:48:50 np0005539065 podman[200004]: 2025-11-28 17:48:50.888631971 +0000 UTC m=+0.121558896 container start 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b)
Nov 28 12:48:50 np0005539065 podman[200004]: ceilometer_agent_compute
Nov 28 12:48:50 np0005539065 systemd[1]: Started ceilometer_agent_compute container.
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: INFO:__main__:Validating config file
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: INFO:__main__:Copying service configuration files
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: INFO:__main__:Writing out command to execute
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: ++ cat /run_command
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: + ARGS=
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: + sudo kolla_copy_cacerts
Nov 28 12:48:50 np0005539065 podman[200027]: 2025-11-28 17:48:50.953009162 +0000 UTC m=+0.051603150 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: sudo: unable to send audit message: Operation not permitted
Nov 28 12:48:50 np0005539065 systemd[1]: 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066-3d66e4f5fc7d6a26.service: Main process exited, code=exited, status=1/FAILURE
Nov 28 12:48:50 np0005539065 systemd[1]: 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066-3d66e4f5fc7d6a26.service: Failed with result 'exit-code'.
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: + [[ ! -n '' ]]
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: + . kolla_extend_start
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: + umask 0022
Nov 28 12:48:50 np0005539065 ceilometer_agent_compute[200020]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Nov 28 12:48:51 np0005539065 podman[200175]: 2025-11-28 17:48:51.443742672 +0000 UTC m=+0.062442114 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 12:48:51 np0005539065 python3.9[200222]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.712 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.712 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.712 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.712 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.712 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.713 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.713 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.713 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.713 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.713 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.713 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.713 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.713 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.713 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.713 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.714 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.714 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.714 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.714 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.714 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.714 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.714 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.714 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.714 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.714 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.715 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.715 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.715 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.715 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.715 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.715 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.715 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.715 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.715 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.715 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.715 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.715 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.716 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.716 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.716 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.716 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.716 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.716 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.716 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.716 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.716 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.716 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.716 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.716 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.717 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.717 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.717 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.717 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.717 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.717 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.717 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.717 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.717 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.717 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.717 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.717 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.718 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.718 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.718 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.718 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.718 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.718 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.718 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.718 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.718 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.718 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.718 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.718 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.719 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.719 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.719 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.719 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.719 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.719 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.719 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.719 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.719 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.719 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.719 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.719 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.720 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.720 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.720 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.720 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.720 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.720 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.720 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.720 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.720 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.720 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.720 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.720 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.721 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.721 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.721 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.721 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.721 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.721 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.721 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.721 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.721 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.721 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.721 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.721 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.721 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.722 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.722 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.722 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.722 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.722 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.722 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.722 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.722 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.722 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.722 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.722 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.722 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.723 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.723 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.723 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.723 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.723 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.723 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.723 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.723 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.723 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.723 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.723 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.723 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.724 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.724 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.724 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.724 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.724 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.724 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.724 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.724 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.724 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.724 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.724 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.724 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.724 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.725 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.725 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.725 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.725 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.725 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.725 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.745 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.745 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.745 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.746 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.746 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.746 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.746 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.746 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.746 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.746 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.746 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.746 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.746 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.746 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.746 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.746 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.747 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.747 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.747 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.747 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.747 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.747 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.747 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.747 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.747 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.747 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.747 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.747 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.747 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.747 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.747 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.747 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.748 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.748 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.748 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.748 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.748 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.748 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.748 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.748 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.748 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.748 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.748 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.748 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.748 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.748 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.748 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.748 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.748 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.748 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.749 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.749 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.749 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.749 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.749 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.749 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.749 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.749 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.749 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.749 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.749 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.749 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.749 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.749 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.749 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.749 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.749 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.749 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.749 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.749 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.750 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.750 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.750 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.750 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.750 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.750 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.750 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.750 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.750 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.750 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.750 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.750 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.750 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.750 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.750 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.751 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.751 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.751 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.751 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.751 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.751 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.751 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.751 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.751 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.751 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.751 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.751 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.751 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.751 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.752 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.752 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.752 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.752 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.752 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.752 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.752 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.752 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.752 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.752 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.752 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.752 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.752 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.753 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.753 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.753 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.753 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.754 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.754 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.755 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.755 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.755 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.755 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.755 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.755 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.755 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.755 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.755 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.755 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.755 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.755 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.755 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.755 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.755 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.755 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.757 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.759 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.759 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.770 15 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.777 15 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.778 15 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.778 15 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.945 15 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.946 15 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.946 15 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.946 15 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.946 15 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.946 15 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.946 15 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.946 15 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.946 15 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.946 15 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.946 15 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.947 15 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.947 15 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.947 15 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.947 15 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.947 15 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.947 15 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.947 15 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.947 15 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.947 15 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.947 15 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.947 15 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.948 15 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.948 15 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.948 15 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.948 15 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.948 15 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.948 15 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.948 15 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.948 15 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.948 15 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.948 15 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.948 15 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.948 15 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.948 15 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.948 15 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.949 15 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.949 15 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.949 15 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.949 15 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.949 15 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.949 15 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.949 15 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.949 15 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.949 15 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.949 15 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.949 15 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.949 15 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.949 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.950 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.950 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.950 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.950 15 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.950 15 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.950 15 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.950 15 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.950 15 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.950 15 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.950 15 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.950 15 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.950 15 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.950 15 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.951 15 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.951 15 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.951 15 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.951 15 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.951 15 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.951 15 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.951 15 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.951 15 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.951 15 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.951 15 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.951 15 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.951 15 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.951 15 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.951 15 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.951 15 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.952 15 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.952 15 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.952 15 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.952 15 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.952 15 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.952 15 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.952 15 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.952 15 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.952 15 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.952 15 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.952 15 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.952 15 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.952 15 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.953 15 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.953 15 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.953 15 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.953 15 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.953 15 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.953 15 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.953 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.953 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.953 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.953 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.953 15 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.953 15 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.953 15 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.953 15 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.953 15 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.954 15 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.954 15 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.954 15 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.954 15 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.954 15 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.954 15 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.954 15 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.954 15 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.954 15 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.954 15 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.954 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.954 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.954 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.954 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.955 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.955 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.955 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.955 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.955 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.955 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.955 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.955 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.955 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.955 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.955 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.955 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.955 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.955 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.955 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.955 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.955 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.955 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.955 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.955 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.955 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.955 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.956 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.956 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.956 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.956 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.956 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.956 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.956 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.956 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.956 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.956 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.956 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.956 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.956 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.956 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.956 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.957 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.957 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.957 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.957 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.957 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.957 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.957 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.957 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.957 15 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.957 15 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.957 15 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.957 15 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.957 15 DEBUG cotyledon._service [-] Run service AgentManager(0) [15] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.960 15 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.972 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.973 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.973 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.973 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc143395760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.974 15 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.974 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.974 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.974 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.974 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.974 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.975 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.975 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.975 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.975 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.975 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.975 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.975 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.975 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.975 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.975 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.975 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.975 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.976 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.976 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.976 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.976 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.977 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.978 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.978 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.978 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc1433970b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.978 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.978 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.979 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.979 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc1433971d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.979 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.979 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc143397c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.979 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.979 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc143397620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.979 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.979 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc143397260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.979 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.979 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc143397290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.979 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.980 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc143408290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.980 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.980 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc1433972f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.980 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.980 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc144640f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.980 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.980 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc1433976b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.980 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.980 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc143397fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.980 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.980 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc14457db80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.980 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.980 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc143397950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.980 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.981 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc143397380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.981 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.981 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc143397bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.981 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.981 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc1433973e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.981 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.981 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc143397c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.981 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.981 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc143397ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.981 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.981 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc1460ad370>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.981 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.981 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc143397d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.981 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.981 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc143397e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.982 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.982 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc143397650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.982 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.982 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc143397e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.982 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.982 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc143397f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.982 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.982 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc143397230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.982 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.982 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.982 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.983 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.983 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.983 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.983 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.983 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.983 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.983 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.983 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.983 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.983 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.983 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.983 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.983 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.983 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.983 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.983 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.983 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.983 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.984 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.984 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.984 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.984 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.984 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:48:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:48:51.984 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:48:52 np0005539065 python3.9[200359]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764352131.1055102-578-213148704822967/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:48:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:48:52.585 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 12:48:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:48:52.586 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 12:48:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:48:52.586 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 12:48:53 np0005539065 python3.9[200511]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Nov 28 12:48:53 np0005539065 python3.9[200663]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 28 12:48:54 np0005539065 python3[200815]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Nov 28 12:48:54 np0005539065 podman[200848]: 2025-11-28 17:48:54.972229408 +0000 UTC m=+0.044720542 container create 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 28 12:48:54 np0005539065 podman[200848]: 2025-11-28 17:48:54.947127846 +0000 UTC m=+0.019618990 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Nov 28 12:48:54 np0005539065 python3[200815]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume 
/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Nov 28 12:48:55 np0005539065 python3.9[201035]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:48:56 np0005539065 podman[201068]: 2025-11-28 17:48:56.01404889 +0000 UTC m=+0.058899338 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 12:48:56 np0005539065 python3.9[201208]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:48:57 np0005539065 python3.9[201359]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764352136.4594495-631-137091634629863/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:48:57 np0005539065 python3.9[201435]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 28 12:48:57 np0005539065 systemd[1]: Reloading.
Nov 28 12:48:57 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:48:57 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:48:58 np0005539065 python3.9[201546]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:48:58 np0005539065 systemd[1]: Reloading.
Nov 28 12:48:58 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:48:58 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:48:58 np0005539065 systemd[1]: Starting node_exporter container...
Nov 28 12:48:58 np0005539065 systemd[1]: Started libcrun container.
Nov 28 12:48:58 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13a75e2f2873f94bb4ecf3c86bfafe627305f7330e683271f16ca1d029ca121b/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 28 12:48:58 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13a75e2f2873f94bb4ecf3c86bfafe627305f7330e683271f16ca1d029ca121b/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 28 12:48:58 np0005539065 systemd[1]: Started /usr/bin/podman healthcheck run 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc.
Nov 28 12:48:58 np0005539065 podman[201586]: 2025-11-28 17:48:58.879955784 +0000 UTC m=+0.114743560 container init 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.894Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.894Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.894Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.894Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.894Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=arp
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=bcache
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=bonding
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=cpu
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=edac
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=filefd
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=netclass
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=netdev
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=netstat
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=nfs
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=nvme
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=softnet
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=systemd
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=xfs
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.895Z caller=node_exporter.go:117 level=info collector=zfs
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.896Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Nov 28 12:48:58 np0005539065 node_exporter[201601]: ts=2025-11-28T17:48:58.896Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Nov 28 12:48:58 np0005539065 podman[201586]: 2025-11-28 17:48:58.913422961 +0000 UTC m=+0.148210727 container start 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 28 12:48:58 np0005539065 podman[201586]: node_exporter
Nov 28 12:48:58 np0005539065 systemd[1]: Started node_exporter container.
Nov 28 12:48:58 np0005539065 podman[201611]: 2025-11-28 17:48:58.976793417 +0000 UTC m=+0.053731663 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 12:48:59 np0005539065 python3.9[201784]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 12:48:59 np0005539065 systemd[1]: Stopping node_exporter container...
Nov 28 12:48:59 np0005539065 systemd[1]: libpod-28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc.scope: Deactivated successfully.
Nov 28 12:48:59 np0005539065 podman[201788]: 2025-11-28 17:48:59.77640668 +0000 UTC m=+0.065916459 container died 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 28 12:48:59 np0005539065 systemd[1]: 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc-688d02c38542fd24.timer: Deactivated successfully.
Nov 28 12:48:59 np0005539065 systemd[1]: Stopped /usr/bin/podman healthcheck run 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc.
Nov 28 12:48:59 np0005539065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc-userdata-shm.mount: Deactivated successfully.
Nov 28 12:48:59 np0005539065 systemd[1]: var-lib-containers-storage-overlay-13a75e2f2873f94bb4ecf3c86bfafe627305f7330e683271f16ca1d029ca121b-merged.mount: Deactivated successfully.
Nov 28 12:48:59 np0005539065 podman[201788]: 2025-11-28 17:48:59.815837941 +0000 UTC m=+0.105347720 container cleanup 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 12:48:59 np0005539065 podman[201788]: node_exporter
Nov 28 12:48:59 np0005539065 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 28 12:48:59 np0005539065 podman[201816]: node_exporter
Nov 28 12:48:59 np0005539065 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Nov 28 12:48:59 np0005539065 systemd[1]: Stopped node_exporter container.
Nov 28 12:48:59 np0005539065 systemd[1]: Starting node_exporter container...
Nov 28 12:48:59 np0005539065 systemd[1]: Started libcrun container.
Nov 28 12:48:59 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13a75e2f2873f94bb4ecf3c86bfafe627305f7330e683271f16ca1d029ca121b/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 28 12:48:59 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13a75e2f2873f94bb4ecf3c86bfafe627305f7330e683271f16ca1d029ca121b/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 28 12:48:59 np0005539065 systemd[1]: Started /usr/bin/podman healthcheck run 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc.
Nov 28 12:48:59 np0005539065 podman[201828]: 2025-11-28 17:48:59.995631727 +0000 UTC m=+0.104129761 container init 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.010Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.010Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.010Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.011Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.011Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.011Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.011Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.011Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.011Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=arp
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=bcache
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=bonding
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=cpu
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=edac
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=filefd
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=netclass
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=netdev
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=netstat
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=nfs
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=nvme
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=softnet
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=systemd
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=xfs
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=node_exporter.go:117 level=info collector=zfs
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.012Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Nov 28 12:49:00 np0005539065 node_exporter[201843]: ts=2025-11-28T17:49:00.013Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Nov 28 12:49:00 np0005539065 podman[201828]: 2025-11-28 17:49:00.036588216 +0000 UTC m=+0.145086240 container start 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 12:49:00 np0005539065 podman[201828]: node_exporter
Nov 28 12:49:00 np0005539065 systemd[1]: Started node_exporter container.
Nov 28 12:49:00 np0005539065 podman[201852]: 2025-11-28 17:49:00.088460391 +0000 UTC m=+0.045568123 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 28 12:49:00 np0005539065 python3.9[202027]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:49:01 np0005539065 python3.9[202150]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764352140.2588062-663-252242330681286/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:49:01 np0005539065 auditd[700]: Audit daemon rotating log files
Nov 28 12:49:01 np0005539065 python3.9[202304]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Nov 28 12:49:02 np0005539065 python3.9[202456]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 28 12:49:03 np0005539065 podman[202504]: 2025-11-28 17:49:03.004867917 +0000 UTC m=+0.069542338 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 28 12:49:03 np0005539065 python3[202635]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Nov 28 12:49:04 np0005539065 podman[202649]: 2025-11-28 17:49:04.83390166 +0000 UTC m=+1.294273270 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Nov 28 12:49:04 np0005539065 podman[202744]: 2025-11-28 17:49:04.95608374 +0000 UTC m=+0.041398551 container create 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible)
Nov 28 12:49:04 np0005539065 podman[202744]: 2025-11-28 17:49:04.936045761 +0000 UTC m=+0.021360602 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Nov 28 12:49:04 np0005539065 python3[202635]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Nov 28 12:49:05 np0005539065 python3.9[202934]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:49:06 np0005539065 python3.9[203088]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:49:07 np0005539065 python3.9[203239]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764352146.5156624-716-19219974136345/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:49:07 np0005539065 python3.9[203315]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 28 12:49:07 np0005539065 systemd[1]: Reloading.
Nov 28 12:49:07 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:49:07 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:49:08 np0005539065 python3.9[203427]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:49:08 np0005539065 systemd[1]: Reloading.
Nov 28 12:49:08 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:49:08 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:49:08 np0005539065 systemd[1]: Starting podman_exporter container...
Nov 28 12:49:08 np0005539065 systemd[1]: Started libcrun container.
Nov 28 12:49:08 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20019ea275edf7be748bf00752df41caf3ccc3e0307909a5e94d778592254a89/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 28 12:49:08 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20019ea275edf7be748bf00752df41caf3ccc3e0307909a5e94d778592254a89/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 28 12:49:08 np0005539065 systemd[1]: Started /usr/bin/podman healthcheck run 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95.
Nov 28 12:49:08 np0005539065 podman[203468]: 2025-11-28 17:49:08.981486516 +0000 UTC m=+0.112704540 container init 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 28 12:49:08 np0005539065 podman_exporter[203484]: ts=2025-11-28T17:49:08.996Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Nov 28 12:49:08 np0005539065 podman_exporter[203484]: ts=2025-11-28T17:49:08.996Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Nov 28 12:49:08 np0005539065 podman_exporter[203484]: ts=2025-11-28T17:49:08.996Z caller=handler.go:94 level=info msg="enabled collectors"
Nov 28 12:49:08 np0005539065 podman_exporter[203484]: ts=2025-11-28T17:49:08.996Z caller=handler.go:105 level=info collector=container
Nov 28 12:49:09 np0005539065 systemd[1]: Starting Podman API Service...
Nov 28 12:49:09 np0005539065 systemd[1]: Started Podman API Service.
Nov 28 12:49:09 np0005539065 podman[203468]: 2025-11-28 17:49:09.02226037 +0000 UTC m=+0.153478374 container start 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 28 12:49:09 np0005539065 podman[203468]: podman_exporter
Nov 28 12:49:09 np0005539065 systemd[1]: Started podman_exporter container.
Nov 28 12:49:09 np0005539065 podman[203494]: time="2025-11-28T17:49:09Z" level=info msg="/usr/bin/podman filtering at log level info"
Nov 28 12:49:09 np0005539065 podman[203494]: time="2025-11-28T17:49:09Z" level=info msg="Setting parallel job count to 25"
Nov 28 12:49:09 np0005539065 podman[203494]: time="2025-11-28T17:49:09Z" level=info msg="Using sqlite as database backend"
Nov 28 12:49:09 np0005539065 podman[203494]: time="2025-11-28T17:49:09Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Nov 28 12:49:09 np0005539065 podman[203494]: time="2025-11-28T17:49:09Z" level=info msg="Using systemd socket activation to determine API endpoint"
Nov 28 12:49:09 np0005539065 podman[203494]: time="2025-11-28T17:49:09Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Nov 28 12:49:09 np0005539065 podman[203494]: @ - - [28/Nov/2025:17:49:09 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Nov 28 12:49:09 np0005539065 podman[203494]: time="2025-11-28T17:49:09Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 12:49:09 np0005539065 podman[203494]: @ - - [28/Nov/2025:17:49:09 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19586 "" "Go-http-client/1.1"
Nov 28 12:49:09 np0005539065 podman[203493]: 2025-11-28 17:49:09.098028209 +0000 UTC m=+0.070495571 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 28 12:49:09 np0005539065 podman_exporter[203484]: ts=2025-11-28T17:49:09.098Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Nov 28 12:49:09 np0005539065 podman_exporter[203484]: ts=2025-11-28T17:49:09.099Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Nov 28 12:49:09 np0005539065 podman_exporter[203484]: ts=2025-11-28T17:49:09.099Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Nov 28 12:49:09 np0005539065 systemd[1]: 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95-644e5aa323da11d6.service: Main process exited, code=exited, status=1/FAILURE
Nov 28 12:49:09 np0005539065 systemd[1]: 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95-644e5aa323da11d6.service: Failed with result 'exit-code'.
Nov 28 12:49:09 np0005539065 python3.9[203681]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 12:49:09 np0005539065 systemd[1]: Stopping podman_exporter container...
Nov 28 12:49:10 np0005539065 podman[203494]: @ - - [28/Nov/2025:17:49:09 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 1449 "" "Go-http-client/1.1"
Nov 28 12:49:10 np0005539065 systemd[1]: libpod-27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95.scope: Deactivated successfully.
Nov 28 12:49:10 np0005539065 podman[203685]: 2025-11-28 17:49:10.040245301 +0000 UTC m=+0.055376772 container died 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 12:49:10 np0005539065 systemd[1]: 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95-644e5aa323da11d6.timer: Deactivated successfully.
Nov 28 12:49:10 np0005539065 systemd[1]: Stopped /usr/bin/podman healthcheck run 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95.
Nov 28 12:49:10 np0005539065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95-userdata-shm.mount: Deactivated successfully.
Nov 28 12:49:10 np0005539065 systemd[1]: var-lib-containers-storage-overlay-20019ea275edf7be748bf00752df41caf3ccc3e0307909a5e94d778592254a89-merged.mount: Deactivated successfully.
Nov 28 12:49:10 np0005539065 podman[203685]: 2025-11-28 17:49:10.364888279 +0000 UTC m=+0.380019770 container cleanup 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 28 12:49:10 np0005539065 podman[203685]: podman_exporter
Nov 28 12:49:10 np0005539065 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 28 12:49:10 np0005539065 podman[203712]: podman_exporter
Nov 28 12:49:10 np0005539065 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Nov 28 12:49:10 np0005539065 systemd[1]: Stopped podman_exporter container.
Nov 28 12:49:10 np0005539065 systemd[1]: Starting podman_exporter container...
Nov 28 12:49:10 np0005539065 systemd[1]: Started libcrun container.
Nov 28 12:49:10 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20019ea275edf7be748bf00752df41caf3ccc3e0307909a5e94d778592254a89/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 28 12:49:10 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20019ea275edf7be748bf00752df41caf3ccc3e0307909a5e94d778592254a89/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 28 12:49:10 np0005539065 systemd[1]: Started /usr/bin/podman healthcheck run 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95.
Nov 28 12:49:10 np0005539065 podman[203725]: 2025-11-28 17:49:10.57194401 +0000 UTC m=+0.118121202 container init 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 28 12:49:10 np0005539065 podman_exporter[203741]: ts=2025-11-28T17:49:10.589Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Nov 28 12:49:10 np0005539065 podman_exporter[203741]: ts=2025-11-28T17:49:10.589Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Nov 28 12:49:10 np0005539065 podman_exporter[203741]: ts=2025-11-28T17:49:10.589Z caller=handler.go:94 level=info msg="enabled collectors"
Nov 28 12:49:10 np0005539065 podman_exporter[203741]: ts=2025-11-28T17:49:10.589Z caller=handler.go:105 level=info collector=container
Nov 28 12:49:10 np0005539065 podman[203494]: @ - - [28/Nov/2025:17:49:10 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Nov 28 12:49:10 np0005539065 podman[203494]: time="2025-11-28T17:49:10Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 12:49:10 np0005539065 podman[203725]: 2025-11-28 17:49:10.604560086 +0000 UTC m=+0.150737268 container start 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 12:49:10 np0005539065 podman[203725]: podman_exporter
Nov 28 12:49:10 np0005539065 podman[203494]: @ - - [28/Nov/2025:17:49:10 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19588 "" "Go-http-client/1.1"
Nov 28 12:49:10 np0005539065 podman_exporter[203741]: ts=2025-11-28T17:49:10.611Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Nov 28 12:49:10 np0005539065 podman_exporter[203741]: ts=2025-11-28T17:49:10.611Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Nov 28 12:49:10 np0005539065 podman_exporter[203741]: ts=2025-11-28T17:49:10.612Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Nov 28 12:49:10 np0005539065 systemd[1]: Started podman_exporter container.
Nov 28 12:49:10 np0005539065 podman[203751]: 2025-11-28 17:49:10.700389063 +0000 UTC m=+0.086298336 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 12:49:11 np0005539065 python3.9[203929]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:49:11 np0005539065 python3.9[204052]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764352150.8431456-748-275560058782540/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:49:12 np0005539065 python3.9[204204]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Nov 28 12:49:13 np0005539065 python3.9[204356]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 28 12:49:14 np0005539065 python3[204508]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Nov 28 12:49:16 np0005539065 podman[204521]: 2025-11-28 17:49:16.94255934 +0000 UTC m=+2.430529716 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Nov 28 12:49:17 np0005539065 podman[204617]: 2025-11-28 17:49:17.069201729 +0000 UTC m=+0.042792965 container create 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.33.7, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, build-date=2025-08-20T13:12:41, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, vcs-type=git, distribution-scope=public)
Nov 28 12:49:17 np0005539065 podman[204617]: 2025-11-28 17:49:17.04464321 +0000 UTC m=+0.018234446 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Nov 28 12:49:17 np0005539065 python3[204508]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Nov 28 12:49:17 np0005539065 python3.9[204807]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:49:18 np0005539065 python3.9[204961]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:49:19 np0005539065 python3.9[205112]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764352158.5412502-801-276312568261690/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:49:19 np0005539065 python3.9[205188]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 28 12:49:19 np0005539065 systemd[1]: Reloading.
Nov 28 12:49:19 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:49:19 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:49:20 np0005539065 python3.9[205299]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:49:20 np0005539065 systemd[1]: Reloading.
Nov 28 12:49:20 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:49:20 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:49:20 np0005539065 systemd[1]: Starting openstack_network_exporter container...
Nov 28 12:49:21 np0005539065 podman[205337]: 2025-11-28 17:49:21.041830728 +0000 UTC m=+0.052069741 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 28 12:49:21 np0005539065 systemd[1]: Started libcrun container.
Nov 28 12:49:21 np0005539065 systemd[1]: 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066-3d66e4f5fc7d6a26.service: Main process exited, code=exited, status=1/FAILURE
Nov 28 12:49:21 np0005539065 systemd[1]: 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066-3d66e4f5fc7d6a26.service: Failed with result 'exit-code'.
Nov 28 12:49:21 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd763002d83e0f38253299fca4ff416f39ca978f1f63389d8bd994591a698580/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 28 12:49:21 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd763002d83e0f38253299fca4ff416f39ca978f1f63389d8bd994591a698580/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 28 12:49:21 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd763002d83e0f38253299fca4ff416f39ca978f1f63389d8bd994591a698580/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 28 12:49:21 np0005539065 systemd[1]: Started /usr/bin/podman healthcheck run 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13.
Nov 28 12:49:21 np0005539065 podman[205339]: 2025-11-28 17:49:21.102483507 +0000 UTC m=+0.108661351 container init 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-type=git, architecture=x86_64, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 28 12:49:21 np0005539065 openstack_network_exporter[205370]: INFO    17:49:21 main.go:48: registering *bridge.Collector
Nov 28 12:49:21 np0005539065 openstack_network_exporter[205370]: INFO    17:49:21 main.go:48: registering *coverage.Collector
Nov 28 12:49:21 np0005539065 openstack_network_exporter[205370]: INFO    17:49:21 main.go:48: registering *datapath.Collector
Nov 28 12:49:21 np0005539065 openstack_network_exporter[205370]: INFO    17:49:21 main.go:48: registering *iface.Collector
Nov 28 12:49:21 np0005539065 openstack_network_exporter[205370]: INFO    17:49:21 main.go:48: registering *memory.Collector
Nov 28 12:49:21 np0005539065 openstack_network_exporter[205370]: INFO    17:49:21 main.go:48: registering *ovnnorthd.Collector
Nov 28 12:49:21 np0005539065 openstack_network_exporter[205370]: INFO    17:49:21 main.go:48: registering *ovn.Collector
Nov 28 12:49:21 np0005539065 openstack_network_exporter[205370]: INFO    17:49:21 main.go:48: registering *ovsdbserver.Collector
Nov 28 12:49:21 np0005539065 openstack_network_exporter[205370]: INFO    17:49:21 main.go:48: registering *pmd_perf.Collector
Nov 28 12:49:21 np0005539065 openstack_network_exporter[205370]: INFO    17:49:21 main.go:48: registering *pmd_rxq.Collector
Nov 28 12:49:21 np0005539065 openstack_network_exporter[205370]: INFO    17:49:21 main.go:48: registering *vswitch.Collector
Nov 28 12:49:21 np0005539065 openstack_network_exporter[205370]: NOTICE  17:49:21 main.go:76: listening on https://:9105/metrics
Nov 28 12:49:21 np0005539065 podman[205339]: 2025-11-28 17:49:21.123261954 +0000 UTC m=+0.129439778 container start 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=9.6, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., release=1755695350, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 28 12:49:21 np0005539065 podman[205339]: openstack_network_exporter
Nov 28 12:49:21 np0005539065 systemd[1]: Started openstack_network_exporter container.
Nov 28 12:49:21 np0005539065 podman[205380]: 2025-11-28 17:49:21.191657022 +0000 UTC m=+0.058149630 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., version=9.6, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, container_name=openstack_network_exporter, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, release=1755695350, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 28 12:49:21 np0005539065 podman[205526]: 2025-11-28 17:49:21.61588568 +0000 UTC m=+0.057425542 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 12:49:21 np0005539065 python3.9[205574]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 12:49:21 np0005539065 systemd[1]: Stopping openstack_network_exporter container...
Nov 28 12:49:22 np0005539065 systemd[1]: libpod-051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13.scope: Deactivated successfully.
Nov 28 12:49:22 np0005539065 podman[205578]: 2025-11-28 17:49:22.02135359 +0000 UTC m=+0.053118076 container died 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, version=9.6, vcs-type=git, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, 
release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 28 12:49:22 np0005539065 systemd[1]: 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13-367a4fd00693f859.timer: Deactivated successfully.
Nov 28 12:49:22 np0005539065 systemd[1]: Stopped /usr/bin/podman healthcheck run 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13.
Nov 28 12:49:22 np0005539065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13-userdata-shm.mount: Deactivated successfully.
Nov 28 12:49:22 np0005539065 systemd[1]: var-lib-containers-storage-overlay-dd763002d83e0f38253299fca4ff416f39ca978f1f63389d8bd994591a698580-merged.mount: Deactivated successfully.
Nov 28 12:49:23 np0005539065 podman[205578]: 2025-11-28 17:49:23.103281219 +0000 UTC m=+1.135045705 container cleanup 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=ubi9-minimal)
Nov 28 12:49:23 np0005539065 podman[205578]: openstack_network_exporter
Nov 28 12:49:23 np0005539065 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 28 12:49:23 np0005539065 podman[205605]: openstack_network_exporter
Nov 28 12:49:23 np0005539065 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Nov 28 12:49:23 np0005539065 systemd[1]: Stopped openstack_network_exporter container.
Nov 28 12:49:23 np0005539065 systemd[1]: Starting openstack_network_exporter container...
Nov 28 12:49:23 np0005539065 systemd[1]: Started libcrun container.
Nov 28 12:49:23 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd763002d83e0f38253299fca4ff416f39ca978f1f63389d8bd994591a698580/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 28 12:49:23 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd763002d83e0f38253299fca4ff416f39ca978f1f63389d8bd994591a698580/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 28 12:49:23 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd763002d83e0f38253299fca4ff416f39ca978f1f63389d8bd994591a698580/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 28 12:49:23 np0005539065 systemd[1]: Started /usr/bin/podman healthcheck run 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13.
Nov 28 12:49:23 np0005539065 podman[205616]: 2025-11-28 17:49:23.295955089 +0000 UTC m=+0.100095473 container init 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, name=ubi9-minimal, container_name=openstack_network_exporter, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 28 12:49:23 np0005539065 openstack_network_exporter[205632]: INFO    17:49:23 main.go:48: registering *bridge.Collector
Nov 28 12:49:23 np0005539065 openstack_network_exporter[205632]: INFO    17:49:23 main.go:48: registering *coverage.Collector
Nov 28 12:49:23 np0005539065 openstack_network_exporter[205632]: INFO    17:49:23 main.go:48: registering *datapath.Collector
Nov 28 12:49:23 np0005539065 openstack_network_exporter[205632]: INFO    17:49:23 main.go:48: registering *iface.Collector
Nov 28 12:49:23 np0005539065 openstack_network_exporter[205632]: INFO    17:49:23 main.go:48: registering *memory.Collector
Nov 28 12:49:23 np0005539065 openstack_network_exporter[205632]: INFO    17:49:23 main.go:48: registering *ovnnorthd.Collector
Nov 28 12:49:23 np0005539065 openstack_network_exporter[205632]: INFO    17:49:23 main.go:48: registering *ovn.Collector
Nov 28 12:49:23 np0005539065 openstack_network_exporter[205632]: INFO    17:49:23 main.go:48: registering *ovsdbserver.Collector
Nov 28 12:49:23 np0005539065 openstack_network_exporter[205632]: INFO    17:49:23 main.go:48: registering *pmd_perf.Collector
Nov 28 12:49:23 np0005539065 openstack_network_exporter[205632]: INFO    17:49:23 main.go:48: registering *pmd_rxq.Collector
Nov 28 12:49:23 np0005539065 openstack_network_exporter[205632]: INFO    17:49:23 main.go:48: registering *vswitch.Collector
Nov 28 12:49:23 np0005539065 openstack_network_exporter[205632]: NOTICE  17:49:23 main.go:76: listening on https://:9105/metrics
Nov 28 12:49:23 np0005539065 podman[205616]: 2025-11-28 17:49:23.320423055 +0000 UTC m=+0.124563430 container start 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_id=edpm, release=1755695350, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Nov 28 12:49:23 np0005539065 podman[205616]: openstack_network_exporter
Nov 28 12:49:23 np0005539065 systemd[1]: Started openstack_network_exporter container.
Nov 28 12:49:23 np0005539065 podman[205642]: 2025-11-28 17:49:23.38082258 +0000 UTC m=+0.050767990 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, container_name=openstack_network_exporter, 
io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, managed_by=edpm_ansible, release=1755695350, version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 28 12:49:23 np0005539065 python3.9[205814]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 28 12:49:24 np0005539065 python3.9[205966]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Nov 28 12:49:25 np0005539065 python3.9[206131]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 28 12:49:25 np0005539065 systemd[1]: Started libpod-conmon-3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3.scope.
Nov 28 12:49:25 np0005539065 podman[206132]: 2025-11-28 17:49:25.992701907 +0000 UTC m=+0.067605010 container exec 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 28 12:49:26 np0005539065 podman[206132]: 2025-11-28 17:49:26.026504882 +0000 UTC m=+0.101407975 container exec_died 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 28 12:49:26 np0005539065 systemd[1]: libpod-conmon-3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3.scope: Deactivated successfully.
Nov 28 12:49:26 np0005539065 podman[206163]: 2025-11-28 17:49:26.120168976 +0000 UTC m=+0.043341158 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 28 12:49:26 np0005539065 python3.9[206334]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 28 12:49:26 np0005539065 systemd[1]: Started libpod-conmon-3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3.scope.
Nov 28 12:49:26 np0005539065 podman[206335]: 2025-11-28 17:49:26.768786387 +0000 UTC m=+0.078282061 container exec 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 28 12:49:26 np0005539065 podman[206335]: 2025-11-28 17:49:26.807503442 +0000 UTC m=+0.116999126 container exec_died 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller)
Nov 28 12:49:26 np0005539065 systemd[1]: libpod-conmon-3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3.scope: Deactivated successfully.
Nov 28 12:49:27 np0005539065 python3.9[206519]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:49:28 np0005539065 python3.9[206671]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Nov 28 12:49:29 np0005539065 python3.9[206837]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 28 12:49:29 np0005539065 systemd[1]: Started libpod-conmon-b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f.scope.
Nov 28 12:49:29 np0005539065 podman[206838]: 2025-11-28 17:49:29.110815691 +0000 UTC m=+0.084267572 container exec b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 28 12:49:29 np0005539065 podman[206838]: 2025-11-28 17:49:29.146450648 +0000 UTC m=+0.119902489 container exec_died b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 28 12:49:29 np0005539065 systemd[1]: libpod-conmon-b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f.scope: Deactivated successfully.
Nov 28 12:49:29 np0005539065 python3.9[207020]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 28 12:49:29 np0005539065 systemd[1]: Started libpod-conmon-b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f.scope.
Nov 28 12:49:30 np0005539065 podman[207021]: 2025-11-28 17:49:30.011608722 +0000 UTC m=+0.092584814 container exec b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 28 12:49:30 np0005539065 podman[207021]: 2025-11-28 17:49:30.040516446 +0000 UTC m=+0.121492518 container exec_died b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 28 12:50:17 np0005539065 python3.9[215370]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:50:17 np0005539065 rsyslogd[1006]: imjournal: 316 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 28 12:50:18 np0005539065 python3.9[215523]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:50:19 np0005539065 python3.9[215675]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:50:20 np0005539065 python3.9[215827]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:50:21 np0005539065 python3.9[215979]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 28 12:50:21 np0005539065 podman[216103]: 2025-11-28 17:50:21.733946358 +0000 UTC m=+0.060938214 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_id=edpm)
Nov 28 12:50:21 np0005539065 podman[216149]: 2025-11-28 17:50:21.990558162 +0000 UTC m=+0.050463149 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 28 12:50:21 np0005539065 python3.9[216147]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 28 12:50:22 np0005539065 systemd[1]: Reloading.
Nov 28 12:50:22 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:50:22 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:50:22 np0005539065 python3.9[216355]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:50:23 np0005539065 python3.9[216508]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:50:23 np0005539065 podman[216510]: 2025-11-28 17:50:23.620215981 +0000 UTC m=+0.069537602 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 28 12:50:24 np0005539065 python3.9[216680]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:50:24 np0005539065 python3.9[216832]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:50:25 np0005539065 python3.9[216953]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764352224.4779298-125-237317638978420/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:50:26 np0005539065 python3.9[217105]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Nov 28 12:50:26 np0005539065 podman[217131]: 2025-11-28 17:50:26.979789261 +0000 UTC m=+0.043809817 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 28 12:50:27 np0005539065 python3.9[217275]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:50:28 np0005539065 python3.9[217396]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764352227.1544356-171-128769258978863/.source.conf _original_basename=ceilometer.conf follow=False checksum=e93ef84feaa07737af66c0c1da2fd4bdcae81d37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:50:28 np0005539065 python3.9[217546]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:50:29 np0005539065 python3.9[217667]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764352228.411395-171-214234344292175/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:50:29 np0005539065 podman[203494]: time="2025-11-28T17:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 12:50:29 np0005539065 podman[203494]: @ - - [28/Nov/2025:17:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22540 "" "Go-http-client/1.1"
Nov 28 12:50:29 np0005539065 podman[203494]: @ - - [28/Nov/2025:17:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3405 "" "Go-http-client/1.1"
Nov 28 12:50:29 np0005539065 python3.9[217817]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:50:30 np0005539065 python3.9[217938]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764352229.4332767-171-19849092171387/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:50:30 np0005539065 podman[218062]: 2025-11-28 17:50:30.895050573 +0000 UTC m=+0.044721680 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 28 12:50:31 np0005539065 python3.9[218103]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:50:31 np0005539065 openstack_network_exporter[205632]: ERROR   17:50:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 12:50:31 np0005539065 openstack_network_exporter[205632]: ERROR   17:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 12:50:31 np0005539065 openstack_network_exporter[205632]: ERROR   17:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 12:50:31 np0005539065 openstack_network_exporter[205632]: ERROR   17:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 12:50:31 np0005539065 openstack_network_exporter[205632]: 
Nov 28 12:50:31 np0005539065 openstack_network_exporter[205632]: ERROR   17:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 12:50:31 np0005539065 openstack_network_exporter[205632]: 
Nov 28 12:50:31 np0005539065 python3.9[218263]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:50:32 np0005539065 python3.9[218415]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:50:32 np0005539065 python3.9[218536]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764352231.7987006-230-157144963766513/.source.json follow=False _original_basename=ceilometer-agent-ipmi.json.j2 checksum=21255e7f7db3155b4a491729298d9407fe6f8335 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:50:33 np0005539065 python3.9[218686]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:50:33 np0005539065 python3.9[218762]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:50:34 np0005539065 podman[218868]: 2025-11-28 17:50:34.014949838 +0000 UTC m=+0.079506640 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 28 12:50:34 np0005539065 python3.9[218933]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:50:34 np0005539065 python3.9[219057]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764352233.732071-230-255796714082450/.source.json follow=False _original_basename=ceilometer_agent_ipmi.json.j2 checksum=cf81874b7544c057599ec397442879f74d42b3ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:50:35 np0005539065 python3.9[219207]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:50:35 np0005539065 python3.9[219328]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764352234.8242054-230-101535979988518/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:50:36 np0005539065 python3.9[219478]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:50:36 np0005539065 python3.9[219599]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764352235.8358805-230-10286755336990/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:50:37 np0005539065 python3.9[219749]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:50:37 np0005539065 python3.9[219870]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764352236.808354-230-18616731567791/.source.json follow=False _original_basename=kepler.json.j2 checksum=89451093c8765edd3915016a9e87770fe489178d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:50:38 np0005539065 python3.9[220020]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:50:38 np0005539065 python3.9[220096]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:50:39 np0005539065 python3.9[220248]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:50:39 np0005539065 python3.9[220400]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:50:40 np0005539065 python3.9[220552]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:50:41 np0005539065 python3.9[220704]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:50:41 np0005539065 python3.9[220827]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764352240.73637-349-167052649493749/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:50:42 np0005539065 python3.9[220903]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:50:42 np0005539065 podman[220998]: 2025-11-28 17:50:42.416972889 +0000 UTC m=+0.052067275 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 12:50:42 np0005539065 python3.9[221050]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764352240.73637-349-167052649493749/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:50:43 np0005539065 python3.9[221202]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:50:43 np0005539065 python3.9[221325]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764352242.7381735-349-259167115591589/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 28 12:50:44 np0005539065 python3.9[221477]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Nov 28 12:50:45 np0005539065 python3.9[221629]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 28 12:50:46 np0005539065 python3[221781]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Nov 28 12:50:46 np0005539065 podman[221816]: 2025-11-28 17:50:46.597537589 +0000 UTC m=+0.046927595 container create fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 28 12:50:46 np0005539065 podman[221816]: 2025-11-28 17:50:46.570880692 +0000 UTC m=+0.020270728 image pull 743c1960518ee2a8df257b87dd40a31faa57a99c6d0aa394baae4cd418c3c2b2 quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Nov 28 12:50:46 np0005539065 python3[221781]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck ipmi --label config_id=edpm --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Nov 28 12:50:47 np0005539065 python3.9[222006]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:50:47 np0005539065 python3.9[222160]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:50:48 np0005539065 nova_compute[189296]: 2025-11-28 17:50:48.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:50:48 np0005539065 nova_compute[189296]: 2025-11-28 17:50:48.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 12:50:48 np0005539065 nova_compute[189296]: 2025-11-28 17:50:48.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 12:50:48 np0005539065 nova_compute[189296]: 2025-11-28 17:50:48.638 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 28 12:50:48 np0005539065 nova_compute[189296]: 2025-11-28 17:50:48.638 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:50:48 np0005539065 python3.9[222311]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764352248.048114-427-97403979125855/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:50:49 np0005539065 python3.9[222387]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 28 12:50:49 np0005539065 nova_compute[189296]: 2025-11-28 17:50:49.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:50:49 np0005539065 systemd[1]: Reloading.
Nov 28 12:50:49 np0005539065 nova_compute[189296]: 2025-11-28 17:50:49.702 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 12:50:49 np0005539065 nova_compute[189296]: 2025-11-28 17:50:49.703 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 12:50:49 np0005539065 nova_compute[189296]: 2025-11-28 17:50:49.703 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 12:50:49 np0005539065 nova_compute[189296]: 2025-11-28 17:50:49.703 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 12:50:49 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:50:49 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:50:49 np0005539065 nova_compute[189296]: 2025-11-28 17:50:49.838 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 12:50:49 np0005539065 nova_compute[189296]: 2025-11-28 17:50:49.838 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5867MB free_disk=72.4410629272461GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 12:50:49 np0005539065 nova_compute[189296]: 2025-11-28 17:50:49.839 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 12:50:49 np0005539065 nova_compute[189296]: 2025-11-28 17:50:49.839 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 12:50:49 np0005539065 nova_compute[189296]: 2025-11-28 17:50:49.959 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 12:50:49 np0005539065 nova_compute[189296]: 2025-11-28 17:50:49.959 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 12:50:49 np0005539065 nova_compute[189296]: 2025-11-28 17:50:49.978 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 12:50:49 np0005539065 nova_compute[189296]: 2025-11-28 17:50:49.995 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 12:50:49 np0005539065 nova_compute[189296]: 2025-11-28 17:50:49.996 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 12:50:49 np0005539065 nova_compute[189296]: 2025-11-28 17:50:49.997 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 12:50:50 np0005539065 python3.9[222497]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:50:50 np0005539065 systemd[1]: Reloading.
Nov 28 12:50:50 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:50:50 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:50:50 np0005539065 systemd[1]: Starting ceilometer_agent_ipmi container...
Nov 28 12:50:50 np0005539065 systemd[1]: Started libcrun container.
Nov 28 12:50:50 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ddf517531c61b6219eff20068b6699a849e5808ba3c6b477c7c30e546a2756/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 28 12:50:50 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ddf517531c61b6219eff20068b6699a849e5808ba3c6b477c7c30e546a2756/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 28 12:50:50 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ddf517531c61b6219eff20068b6699a849e5808ba3c6b477c7c30e546a2756/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 28 12:50:50 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ddf517531c61b6219eff20068b6699a849e5808ba3c6b477c7c30e546a2756/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 28 12:50:50 np0005539065 systemd[1]: Started /usr/bin/podman healthcheck run fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1.
Nov 28 12:50:50 np0005539065 podman[222538]: 2025-11-28 17:50:50.944231046 +0000 UTC m=+0.100858783 container init fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 28 12:50:50 np0005539065 ceilometer_agent_ipmi[222554]: + sudo -E kolla_set_configs
Nov 28 12:50:50 np0005539065 podman[222538]: 2025-11-28 17:50:50.970169276 +0000 UTC m=+0.126796993 container start fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 28 12:50:50 np0005539065 podman[222538]: ceilometer_agent_ipmi
Nov 28 12:50:50 np0005539065 systemd[1]: Started ceilometer_agent_ipmi container.
Nov 28 12:50:50 np0005539065 nova_compute[189296]: 2025-11-28 17:50:50.997 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:50:50 np0005539065 nova_compute[189296]: 2025-11-28 17:50:50.998 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:50:50 np0005539065 nova_compute[189296]: 2025-11-28 17:50:50.998 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: INFO:__main__:Validating config file
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: INFO:__main__:Copying service configuration files
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: INFO:__main__:Writing out command to execute
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: ++ cat /run_command
Nov 28 12:50:51 np0005539065 podman[222561]: 2025-11-28 17:50:51.026125402 +0000 UTC m=+0.043533725 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: + ARGS=
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: + sudo kolla_copy_cacerts
Nov 28 12:50:51 np0005539065 systemd[1]: fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1-22cddb87018f765.service: Main process exited, code=exited, status=1/FAILURE
Nov 28 12:50:51 np0005539065 systemd[1]: fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1-22cddb87018f765.service: Failed with result 'exit-code'.
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: + [[ ! -n '' ]]
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: + . kolla_extend_start
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: + umask 0022
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Nov 28 12:50:51 np0005539065 nova_compute[189296]: 2025-11-28 17:50:51.621 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:50:51 np0005539065 nova_compute[189296]: 2025-11-28 17:50:51.623 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:50:51 np0005539065 python3.9[222737]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.888 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.889 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.889 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.889 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.889 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.889 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.889 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.889 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.889 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.889 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.890 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.890 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.890 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.890 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.890 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.890 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.890 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.890 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.890 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.891 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.891 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.891 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.891 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.891 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.891 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.891 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.891 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.891 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.891 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.891 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.891 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.892 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.892 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.892 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.892 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.892 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.892 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.892 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.892 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.892 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.892 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.892 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.892 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.893 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.893 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.893 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.893 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.893 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.893 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.893 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.893 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.893 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.893 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.893 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.893 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.894 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.894 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.894 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.894 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.894 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.894 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.894 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.894 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.894 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.894 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.894 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.895 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.895 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.895 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.895 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.895 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.895 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.895 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.895 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.895 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.895 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.895 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.896 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.896 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.896 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.896 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.896 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.896 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.896 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.896 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.896 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.896 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.896 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.896 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.897 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.897 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.897 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.897 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.897 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.897 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.897 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.897 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.897 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.897 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.897 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.898 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.898 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.898 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.898 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.898 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.898 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.898 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.898 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.898 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.898 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.899 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.899 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.899 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.899 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.899 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.899 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.899 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.899 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.899 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.899 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.899 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.900 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.900 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.900 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.900 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.900 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.900 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.900 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.900 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.900 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.900 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.901 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.901 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.901 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.901 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.901 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.901 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.901 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.901 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.901 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.902 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.902 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.902 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.902 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.902 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.902 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.902 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.902 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.902 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.902 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.902 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.903 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.903 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.903 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.903 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.903 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.903 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.903 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.903 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.903 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.903 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.903 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.904 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.904 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.904 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.921 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.922 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 28 12:50:51 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:51.923 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.973 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.974 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.974 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.974 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc143395760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.974 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.975 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.975 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.975 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.975 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.975 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.975 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.976 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.976 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.976 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.976 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.976 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.976 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.976 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.976 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.977 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.978 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc1433970b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.978 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.978 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc1433971d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.978 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.978 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.978 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.979 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc143397c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.979 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.979 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc143397620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.979 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.979 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.979 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc143397260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.979 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.979 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.980 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc143397290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.980 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.980 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc143408290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.981 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.981 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc1433972f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.981 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.981 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc144640f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.982 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.982 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc1433976b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.982 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.982 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc143397fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.982 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.982 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc14457db80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.982 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.982 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc143397950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.982 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.982 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc143397380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.982 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.982 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc143397bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.982 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.983 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc1433973e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.983 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.983 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc143397c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.983 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.983 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc143397ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.983 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.983 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc1460ad370>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.983 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.983 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc143397d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.983 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.983 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc143397e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.983 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.983 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc143397650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.983 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.984 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc143397e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.984 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.984 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc143397f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.984 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.984 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc143397230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.984 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.984 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.984 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.984 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.984 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.984 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.984 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.984 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.984 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.985 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.985 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.985 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.985 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.985 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.985 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.985 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.985 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.985 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.985 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.985 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.985 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.985 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.985 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.986 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.986 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.986 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:50:51 np0005539065 ceilometer_agent_compute[200020]: 2025-11-28 17:50:51.986 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.022 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpstfk5cmn/privsep.sock']
Nov 28 12:50:52 np0005539065 podman[222781]: 2025-11-28 17:50:52.053905646 +0000 UTC m=+0.098862517 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=f26160204c78771e78cdd2489258319b, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Nov 28 12:50:52 np0005539065 podman[222855]: 2025-11-28 17:50:52.097091432 +0000 UTC m=+0.051705617 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 28 12:50:52 np0005539065 python3.9[222935]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 28 12:50:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:50:52.587 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 12:50:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:50:52.588 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 12:50:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:50:52.588 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.609 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.610 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpstfk5cmn/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.515 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.519 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.521 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.521 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Nov 28 12:50:52 np0005539065 nova_compute[189296]: 2025-11-28 17:50:52.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:50:52 np0005539065 nova_compute[189296]: 2025-11-28 17:50:52.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.709 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.709 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.710 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.710 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.710 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.710 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.710 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.711 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.711 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.711 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.711 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.711 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.711 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.714 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.714 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.714 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.714 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.715 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.715 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.715 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.715 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.715 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.715 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.715 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.715 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.715 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.716 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.716 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.716 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.716 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.716 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.716 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.717 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.717 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.717 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.717 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.717 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.717 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.717 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.717 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.718 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.718 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.718 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.718 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.718 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.718 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.718 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.718 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.718 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.718 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.719 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.719 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.719 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.719 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.719 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.720 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.720 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.720 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.720 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.720 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.720 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.720 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.720 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.720 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.720 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.720 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.721 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.721 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.721 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.721 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.721 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.721 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.721 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.721 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.721 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.721 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.722 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.722 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.722 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.722 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.722 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.722 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.722 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.722 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.722 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.723 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.723 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.723 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.723 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.723 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.723 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.723 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.723 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.724 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.724 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.724 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.724 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.724 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.724 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.724 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.724 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.724 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.725 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.725 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.725 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.725 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.725 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.725 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.725 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.725 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.725 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.725 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.726 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.726 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.726 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.726 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.726 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.726 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.726 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.726 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.726 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.726 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.727 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.727 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.727 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.727 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.727 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.727 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.727 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.727 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.727 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.728 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.728 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.728 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.728 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.728 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.728 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.728 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.728 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.728 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.729 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.729 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.729 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.729 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.729 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.729 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.729 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.729 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.730 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.730 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.730 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.730 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.730 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.730 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.730 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.730 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.730 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.731 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.731 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.731 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.731 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.731 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.731 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.731 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.731 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.731 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.732 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.732 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.732 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.732 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.732 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.732 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.732 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.732 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.732 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.732 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.733 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.733 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.733 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.733 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.733 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.733 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.733 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.733 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.733 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.733 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.734 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.734 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.734 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.734 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.734 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.734 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.734 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.734 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.734 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.734 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.735 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.735 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.735 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.735 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.735 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.735 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.735 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.735 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.735 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.735 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.736 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.736 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.736 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.736 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.736 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.736 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.736 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.736 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.736 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.736 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Nov 28 12:50:52 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:52.739 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Nov 28 12:50:53 np0005539065 python3[223093]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Nov 28 12:50:53 np0005539065 podman[223125]: 2025-11-28 17:50:53.375597803 +0000 UTC m=+0.048688736 container create f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, build-date=2024-09-18T21:23:30, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, maintainer=Red Hat, Inc., distribution-scope=public, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.buildah.version=1.29.0, release=1214.1726694543, architecture=x86_64, config_id=edpm, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.tags=base rhel9)
Nov 28 12:50:53 np0005539065 podman[223125]: 2025-11-28 17:50:53.349312125 +0000 UTC m=+0.022403078 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Nov 28 12:50:53 np0005539065 python3[223093]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Nov 28 12:50:53 np0005539065 podman[223287]: 2025-11-28 17:50:53.941204307 +0000 UTC m=+0.056519250 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.tags=minimal rhel9, release=1755695350, io.openshift.expose-services=, version=9.6, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, name=ubi9-minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base 
Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 28 12:50:54 np0005539065 python3.9[223335]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:50:54 np0005539065 python3.9[223489]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:50:55 np0005539065 python3.9[223640]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764352254.8604486-489-124088457465318/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:50:55 np0005539065 python3.9[223716]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 28 12:50:55 np0005539065 systemd[1]: Reloading.
Nov 28 12:50:56 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:50:56 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:50:56 np0005539065 python3.9[223827]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 12:50:56 np0005539065 systemd[1]: Reloading.
Nov 28 12:50:56 np0005539065 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 12:50:56 np0005539065 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 12:50:57 np0005539065 systemd[1]: Starting kepler container...
Nov 28 12:50:57 np0005539065 podman[223865]: 2025-11-28 17:50:57.244937903 +0000 UTC m=+0.060017023 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 28 12:50:57 np0005539065 systemd[1]: Started libcrun container.
Nov 28 12:50:57 np0005539065 systemd[1]: Started /usr/bin/podman healthcheck run f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7.
Nov 28 12:50:57 np0005539065 podman[223867]: 2025-11-28 17:50:57.322818084 +0000 UTC m=+0.116778607 container init f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, 
config_id=edpm, release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, distribution-scope=public, io.openshift.expose-services=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 28 12:50:57 np0005539065 podman[223867]: 2025-11-28 17:50:57.350748301 +0000 UTC m=+0.144708814 container start f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=kepler, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, description=The 
Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0)
Nov 28 12:50:57 np0005539065 kepler[223901]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 28 12:50:57 np0005539065 podman[223867]: kepler
Nov 28 12:50:57 np0005539065 systemd[1]: Started kepler container.
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.359694       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.359892       1 config.go:293] using gCgroup ID in the BPF program: true
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.359905       1 config.go:295] kernel version: 5.14
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.361408       1 power.go:78] Unable to obtain power, use estimate method
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.361426       1 redfish.go:169] failed to get redfish credential file path
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.361743       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.361753       1 power.go:79] using none to obtain power
Nov 28 12:50:57 np0005539065 kepler[223901]: E1128 17:50:57.361766       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Nov 28 12:50:57 np0005539065 kepler[223901]: E1128 17:50:57.361783       1 exporter.go:154] failed to init GPU accelerators: no devices found
Nov 28 12:50:57 np0005539065 kepler[223901]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.363370       1 exporter.go:84] Number of CPUs: 8
Nov 28 12:50:57 np0005539065 podman[223906]: 2025-11-28 17:50:57.422056378 +0000 UTC m=+0.063181977 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, io.openshift.expose-services=, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.tags=base rhel9, config_id=edpm, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-container, io.buildah.version=1.29.0)
Nov 28 12:50:57 np0005539065 systemd[1]: f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7-4f3b004813d1825a.service: Main process exited, code=exited, status=1/FAILURE
Nov 28 12:50:57 np0005539065 systemd[1]: f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7-4f3b004813d1825a.service: Failed with result 'exit-code'.
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.907400       1 watcher.go:83] Using in cluster k8s config
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.907437       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Nov 28 12:50:57 np0005539065 kepler[223901]: E1128 17:50:57.907546       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.913589       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.913622       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.917398       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.917430       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.924645       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.924686       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.924702       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.931697       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.931736       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.931742       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.931747       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.931754       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.931769       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.931853       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.931880       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.931900       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.931918       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.932066       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Nov 28 12:50:57 np0005539065 kepler[223901]: I1128 17:50:57.932591       1 exporter.go:208] Started Kepler in 573.0763ms
Nov 28 12:50:58 np0005539065 python3.9[224085]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 12:50:58 np0005539065 systemd[1]: Stopping ceilometer_agent_ipmi container...
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:58.197 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:58.299 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:58.299 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:58.299 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[222554]: 2025-11-28 17:50:58.308 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Nov 28 12:50:58 np0005539065 systemd[1]: libpod-fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1.scope: Deactivated successfully.
Nov 28 12:50:58 np0005539065 systemd[1]: libpod-fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1.scope: Consumed 2.020s CPU time.
Nov 28 12:50:58 np0005539065 podman[224099]: 2025-11-28 17:50:58.471961343 +0000 UTC m=+0.320194503 container died fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 12:50:58 np0005539065 systemd[1]: fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1-22cddb87018f765.timer: Deactivated successfully.
Nov 28 12:50:58 np0005539065 systemd[1]: Stopped /usr/bin/podman healthcheck run fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1.
Nov 28 12:50:58 np0005539065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1-userdata-shm.mount: Deactivated successfully.
Nov 28 12:50:58 np0005539065 systemd[1]: var-lib-containers-storage-overlay-b4ddf517531c61b6219eff20068b6699a849e5808ba3c6b477c7c30e546a2756-merged.mount: Deactivated successfully.
Nov 28 12:50:58 np0005539065 podman[224099]: 2025-11-28 17:50:58.533455299 +0000 UTC m=+0.381688459 container cleanup fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Nov 28 12:50:58 np0005539065 podman[224099]: ceilometer_agent_ipmi
Nov 28 12:50:58 np0005539065 podman[224127]: ceilometer_agent_ipmi
Nov 28 12:50:58 np0005539065 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Nov 28 12:50:58 np0005539065 systemd[1]: Stopped ceilometer_agent_ipmi container.
Nov 28 12:50:58 np0005539065 systemd[1]: Starting ceilometer_agent_ipmi container...
Nov 28 12:50:58 np0005539065 systemd[1]: Started libcrun container.
Nov 28 12:50:58 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ddf517531c61b6219eff20068b6699a849e5808ba3c6b477c7c30e546a2756/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 28 12:50:58 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ddf517531c61b6219eff20068b6699a849e5808ba3c6b477c7c30e546a2756/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 28 12:50:58 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ddf517531c61b6219eff20068b6699a849e5808ba3c6b477c7c30e546a2756/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 28 12:50:58 np0005539065 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ddf517531c61b6219eff20068b6699a849e5808ba3c6b477c7c30e546a2756/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 28 12:50:58 np0005539065 systemd[1]: Started /usr/bin/podman healthcheck run fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1.
Nov 28 12:50:58 np0005539065 podman[224139]: 2025-11-28 17:50:58.804874513 +0000 UTC m=+0.177100937 container init fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: + sudo -E kolla_set_configs
Nov 28 12:50:58 np0005539065 podman[224139]: 2025-11-28 17:50:58.836453315 +0000 UTC m=+0.208679719 container start fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 28 12:50:58 np0005539065 podman[224139]: ceilometer_agent_ipmi
Nov 28 12:50:58 np0005539065 systemd[1]: Started ceilometer_agent_ipmi container.
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: INFO:__main__:Validating config file
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: INFO:__main__:Copying service configuration files
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: INFO:__main__:Writing out command to execute
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: ++ cat /run_command
Nov 28 12:50:58 np0005539065 podman[224162]: 2025-11-28 17:50:58.906353579 +0000 UTC m=+0.060329029 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: + ARGS=
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: + sudo kolla_copy_cacerts
Nov 28 12:50:58 np0005539065 systemd[1]: fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1-71ba8aa7043fb9c6.service: Main process exited, code=exited, status=1/FAILURE
Nov 28 12:50:58 np0005539065 systemd[1]: fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1-71ba8aa7043fb9c6.service: Failed with result 'exit-code'.
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: + [[ ! -n '' ]]
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: + . kolla_extend_start
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: + umask 0022
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 28 12:50:58 np0005539065 ceilometer_agent_ipmi[224153]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Nov 28 12:50:59 np0005539065 python3.9[224338]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 12:50:59 np0005539065 systemd[1]: Stopping kepler container...
Nov 28 12:50:59 np0005539065 podman[203494]: time="2025-11-28T17:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 12:50:59 np0005539065 podman[203494]: @ - - [28/Nov/2025:17:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28293 "" "Go-http-client/1.1"
Nov 28 12:50:59 np0005539065 podman[203494]: @ - - [28/Nov/2025:17:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4255 "" "Go-http-client/1.1"
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.784 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.784 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.784 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.784 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.785 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.785 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.785 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.785 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.785 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.785 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.785 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.785 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.786 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.786 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.786 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.786 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.786 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.786 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.786 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.786 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.787 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.787 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.787 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.787 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.787 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.787 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.787 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.787 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.787 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.788 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.788 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.788 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.788 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.788 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.788 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.788 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.788 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.788 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.788 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.789 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.789 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.789 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.789 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.789 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.789 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.789 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.789 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.789 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.789 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.790 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.790 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.790 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.790 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.790 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.790 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.790 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.790 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.790 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.790 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.791 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.791 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.791 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.791 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.791 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.791 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.791 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.791 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.791 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.791 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.792 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.792 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.792 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.792 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.792 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.792 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.792 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.792 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.792 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.793 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.793 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.793 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.793 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.793 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.793 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.793 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.793 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.793 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.794 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.794 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.794 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.794 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.794 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.794 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.794 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.794 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.795 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.795 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.795 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.795 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.795 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.795 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.795 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.796 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.796 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.796 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.796 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.796 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.796 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.796 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.797 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.797 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.797 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.797 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.797 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.797 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.797 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.798 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.798 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.798 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.798 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.798 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.798 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.798 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.799 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.799 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.799 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.799 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.799 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.799 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.799 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.799 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.800 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.800 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.800 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.800 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.800 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.800 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.800 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.800 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.800 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.800 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.801 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.801 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.801 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.801 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.801 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.801 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.801 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.801 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.801 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.802 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.802 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.802 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.802 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.802 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.802 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.802 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.803 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.803 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.803 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.803 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.803 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.803 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.803 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.803 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 28 12:50:59 np0005539065 kepler[223901]: I1128 17:50:59.811188       1 exporter.go:218] Received shutdown signal
Nov 28 12:50:59 np0005539065 kepler[223901]: I1128 17:50:59.811385       1 exporter.go:226] Exiting...
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.822 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.823 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.824 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Nov 28 12:50:59 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:50:59.837 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmppher0ppd/privsep.sock']
Nov 28 12:50:59 np0005539065 systemd[1]: libpod-f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7.scope: Deactivated successfully.
Nov 28 12:51:00 np0005539065 podman[224342]: 2025-11-28 17:51:00.00414548 +0000 UTC m=+0.250282087 container died f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.buildah.version=1.29.0, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, config_id=edpm, managed_by=edpm_ansible, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.component=ubi9-container, container_name=kepler)
Nov 28 12:51:00 np0005539065 systemd[1]: f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7-4f3b004813d1825a.timer: Deactivated successfully.
Nov 28 12:51:00 np0005539065 systemd[1]: Stopped /usr/bin/podman healthcheck run f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7.
Nov 28 12:51:00 np0005539065 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7-userdata-shm.mount: Deactivated successfully.
Nov 28 12:51:00 np0005539065 systemd[1]: var-lib-containers-storage-overlay-dd60b09fa5e78bcbec1d58740bf0958ebe94d1c817c819152ff47ae5bf0a9bf9-merged.mount: Deactivated successfully.
Nov 28 12:51:00 np0005539065 podman[224342]: 2025-11-28 17:51:00.055509019 +0000 UTC m=+0.301645616 container cleanup f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, config_id=edpm, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, release-0.7.12=, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9)
Nov 28 12:51:00 np0005539065 podman[224342]: kepler
Nov 28 12:51:00 np0005539065 podman[224372]: kepler
Nov 28 12:51:00 np0005539065 systemd[1]: edpm_kepler.service: Deactivated successfully.
Nov 28 12:51:00 np0005539065 systemd[1]: Stopped kepler container.
Nov 28 12:51:00 np0005539065 systemd[1]: Starting kepler container...
Nov 28 12:51:00 np0005539065 systemd[1]: Started libcrun container.
Nov 28 12:51:00 np0005539065 systemd[1]: Started /usr/bin/podman healthcheck run f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7.
Nov 28 12:51:00 np0005539065 podman[224387]: 2025-11-28 17:51:00.245118898 +0000 UTC m=+0.099114231 container init f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, architecture=x86_64, io.buildah.version=1.29.0, container_name=kepler, vcs-type=git, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., version=9.4, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release-0.7.12=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 28 12:51:00 np0005539065 kepler[224403]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 28 12:51:00 np0005539065 podman[224387]: 2025-11-28 17:51:00.26603617 +0000 UTC m=+0.120031493 container start f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=kepler, name=ubi9, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., version=9.4, io.openshift.expose-services=, architecture=x86_64, release=1214.1726694543, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, managed_by=edpm_ansible, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 28 12:51:00 np0005539065 podman[224387]: kepler
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.273883       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.274186       1 config.go:293] using gCgroup ID in the BPF program: true
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.274215       1 config.go:295] kernel version: 5.14
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.274924       1 power.go:78] Unable to obtain power, use estimate method
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.274948       1 redfish.go:169] failed to get redfish credential file path
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.275353       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.275368       1 power.go:79] using none to obtain power
Nov 28 12:51:00 np0005539065 kepler[224403]: E1128 17:51:00.275383       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Nov 28 12:51:00 np0005539065 kepler[224403]: E1128 17:51:00.275406       1 exporter.go:154] failed to init GPU accelerators: no devices found
Nov 28 12:51:00 np0005539065 systemd[1]: Started kepler container.
Nov 28 12:51:00 np0005539065 kepler[224403]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.278170       1 exporter.go:84] Number of CPUs: 8
Nov 28 12:51:00 np0005539065 podman[224413]: 2025-11-28 17:51:00.323866301 +0000 UTC m=+0.047358135 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, release-0.7.12=, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, io.openshift.tags=base rhel9, version=9.4, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, release=1214.1726694543)
Nov 28 12:51:00 np0005539065 systemd[1]: f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7-4010f717b5ccd1d5.service: Main process exited, code=exited, status=1/FAILURE
Nov 28 12:51:00 np0005539065 systemd[1]: f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7-4010f717b5ccd1d5.service: Failed with result 'exit-code'.
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.498 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.499 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmppher0ppd/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.383 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.387 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.389 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.389 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.596 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.596 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.597 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.597 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.597 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.598 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.598 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.598 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.598 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.598 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.598 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.598 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.598 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.601 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.601 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.601 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.601 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.601 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.601 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.602 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.602 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.602 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.602 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.602 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.602 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.602 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.602 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.602 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.602 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.603 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.603 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.603 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.603 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.603 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.603 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.603 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.603 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.603 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.603 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.604 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.604 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.604 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.604 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.604 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.604 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.604 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.604 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.604 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.604 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.604 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.604 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.604 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.605 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.605 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.605 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.605 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.605 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.605 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.605 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.605 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.605 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.605 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.606 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.606 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.606 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.606 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.606 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.606 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.606 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.606 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.606 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.606 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.606 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.606 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.607 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.607 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.607 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.607 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.607 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.607 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.607 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.607 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.607 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.608 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.608 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.608 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.608 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.608 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.608 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.608 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.608 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.608 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.608 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.608 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.609 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.609 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.609 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.609 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.609 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.609 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.609 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.609 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.609 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.609 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.610 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.610 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.610 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.610 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.610 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.610 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.610 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.610 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.610 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.610 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.610 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.610 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.611 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.611 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.611 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.611 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.611 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.611 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.611 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.611 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.611 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.611 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.612 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.612 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.612 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.612 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.612 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.612 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.612 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.612 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.612 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.612 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.612 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.612 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.613 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.613 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.613 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.613 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.613 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.613 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.613 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.613 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.613 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.613 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.613 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.613 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.614 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.614 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.614 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.614 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.614 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.614 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.614 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.614 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.614 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.614 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.614 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.614 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.615 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.615 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.615 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.615 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.615 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.615 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.615 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.615 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.615 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.615 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.615 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.615 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.616 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.616 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.616 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.616 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.616 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.616 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.616 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.616 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.616 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.616 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.616 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.616 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.617 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.617 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.617 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.617 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.617 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.617 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.617 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.617 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.617 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.617 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.617 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.617 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.618 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.618 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.618 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.618 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.618 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.618 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.618 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.618 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.618 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.618 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.618 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.619 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.619 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.619 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.619 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.619 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.619 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.619 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Nov 28 12:51:00 np0005539065 ceilometer_agent_ipmi[224153]: 2025-11-28 17:51:00.621 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.801002       1 watcher.go:83] Using in cluster k8s config
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.801037       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Nov 28 12:51:00 np0005539065 kepler[224403]: E1128 17:51:00.801082       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.804946       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.804978       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.808136       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.808161       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.814244       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.814271       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.814282       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.820959       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.820994       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.820999       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.821003       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.821008       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.821018       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.821167       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.821194       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.821215       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.821230       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.821324       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Nov 28 12:51:00 np0005539065 kepler[224403]: I1128 17:51:00.821785       1 exporter.go:208] Started Kepler in 548.106761ms
Nov 28 12:51:00 np0005539065 python3.9[224592]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 28 12:51:00 np0005539065 podman[224624]: 2025-11-28 17:51:00.992568479 +0000 UTC m=+0.055907146 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 28 12:51:01 np0005539065 openstack_network_exporter[205632]: ERROR   17:51:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 12:51:01 np0005539065 openstack_network_exporter[205632]: ERROR   17:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 12:51:01 np0005539065 openstack_network_exporter[205632]: ERROR   17:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 12:51:01 np0005539065 openstack_network_exporter[205632]: ERROR   17:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 12:51:01 np0005539065 openstack_network_exporter[205632]: 
Nov 28 12:51:01 np0005539065 openstack_network_exporter[205632]: ERROR   17:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 12:51:01 np0005539065 openstack_network_exporter[205632]: 
Nov 28 12:51:01 np0005539065 python3.9[224778]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Nov 28 12:51:02 np0005539065 python3.9[224941]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 28 12:51:03 np0005539065 systemd[1]: Started libpod-conmon-3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3.scope.
Nov 28 12:51:03 np0005539065 podman[224942]: 2025-11-28 17:51:03.082534856 +0000 UTC m=+0.086667829 container exec 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 28 12:51:03 np0005539065 podman[224942]: 2025-11-28 17:51:03.114425417 +0000 UTC m=+0.118558370 container exec_died 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 28 12:51:03 np0005539065 systemd[1]: libpod-conmon-3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3.scope: Deactivated successfully.
Nov 28 12:51:03 np0005539065 python3.9[225121]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 28 12:51:03 np0005539065 systemd[1]: Started libpod-conmon-3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3.scope.
Nov 28 12:51:03 np0005539065 podman[225122]: 2025-11-28 17:51:03.980227651 +0000 UTC m=+0.084870428 container exec 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 28 12:51:04 np0005539065 podman[225122]: 2025-11-28 17:51:04.012016398 +0000 UTC m=+0.116659165 container exec_died 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 12:51:04 np0005539065 systemd[1]: libpod-conmon-3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3.scope: Deactivated successfully.
Nov 28 12:51:04 np0005539065 podman[225150]: 2025-11-28 17:51:04.170416064 +0000 UTC m=+0.099721056 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 28 12:51:04 np0005539065 python3.9[225327]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:51:05 np0005539065 python3.9[225479]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Nov 28 12:51:06 np0005539065 python3.9[225641]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 28 12:51:06 np0005539065 systemd[1]: Started libpod-conmon-b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f.scope.
Nov 28 12:51:06 np0005539065 podman[225642]: 2025-11-28 17:51:06.392095869 +0000 UTC m=+0.088416491 container exec b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 28 12:51:06 np0005539065 podman[225642]: 2025-11-28 17:51:06.397739542 +0000 UTC m=+0.094060144 container exec_died b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 28 12:51:06 np0005539065 systemd[1]: libpod-conmon-b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f.scope: Deactivated successfully.
Nov 28 12:51:07 np0005539065 python3.9[225825]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 28 12:51:07 np0005539065 systemd[1]: Started libpod-conmon-b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f.scope.
Nov 28 12:51:07 np0005539065 podman[225826]: 2025-11-28 17:51:07.219663905 +0000 UTC m=+0.085146124 container exec b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 28 12:51:07 np0005539065 podman[225826]: 2025-11-28 17:51:07.251250517 +0000 UTC m=+0.116732726 container exec_died b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 28 12:51:07 np0005539065 systemd[1]: libpod-conmon-b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f.scope: Deactivated successfully.
Nov 28 12:51:08 np0005539065 python3.9[226008]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:51:09 np0005539065 python3.9[226160]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Nov 28 12:51:10 np0005539065 python3.9[226325]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 28 12:51:10 np0005539065 systemd[1]: Started libpod-conmon-bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc.scope.
Nov 28 12:51:10 np0005539065 podman[226326]: 2025-11-28 17:51:10.237791615 +0000 UTC m=+0.101723523 container exec bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 28 12:51:10 np0005539065 podman[226326]: 2025-11-28 17:51:10.27243484 +0000 UTC m=+0.136366718 container exec_died bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 12:51:10 np0005539065 systemd[1]: libpod-conmon-bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc.scope: Deactivated successfully.
Nov 28 12:51:11 np0005539065 python3.9[226509]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 28 12:51:11 np0005539065 systemd[1]: Started libpod-conmon-bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc.scope.
Nov 28 12:51:11 np0005539065 podman[226510]: 2025-11-28 17:51:11.194014976 +0000 UTC m=+0.094879883 container exec bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 28 12:51:11 np0005539065 podman[226510]: 2025-11-28 17:51:11.224520254 +0000 UTC m=+0.125385141 container exec_died bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 28 12:51:11 np0005539065 systemd[1]: libpod-conmon-bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc.scope: Deactivated successfully.
Nov 28 12:51:12 np0005539065 python3.9[226689]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:51:12 np0005539065 podman[226813]: 2025-11-28 17:51:12.668557209 +0000 UTC m=+0.073745505 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 12:51:12 np0005539065 python3.9[226863]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Nov 28 12:51:13 np0005539065 python3.9[227027]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 28 12:51:13 np0005539065 systemd[1]: Started libpod-conmon-210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066.scope.
Nov 28 12:51:13 np0005539065 podman[227028]: 2025-11-28 17:51:13.794876661 +0000 UTC m=+0.112208590 container exec 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 28 12:51:13 np0005539065 podman[227028]: 2025-11-28 17:51:13.826842273 +0000 UTC m=+0.144174162 container exec_died 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=f26160204c78771e78cdd2489258319b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 28 12:51:13 np0005539065 systemd[1]: libpod-conmon-210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066.scope: Deactivated successfully.
Nov 28 12:51:14 np0005539065 python3.9[227209]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 28 12:51:14 np0005539065 systemd[1]: Started libpod-conmon-210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066.scope.
Nov 28 12:51:14 np0005539065 podman[227210]: 2025-11-28 17:51:14.665302073 +0000 UTC m=+0.082967932 container exec 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=f26160204c78771e78cdd2489258319b, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 28 12:51:14 np0005539065 podman[227210]: 2025-11-28 17:51:14.697807739 +0000 UTC m=+0.115473608 container exec_died 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, 
tcib_managed=true, org.label-schema.build-date=20251125)
Nov 28 12:51:14 np0005539065 systemd[1]: libpod-conmon-210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066.scope: Deactivated successfully.
Nov 28 12:51:15 np0005539065 python3.9[227391]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:51:16 np0005539065 python3.9[227543]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Nov 28 12:51:17 np0005539065 python3.9[227707]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 28 12:51:17 np0005539065 systemd[1]: Started libpod-conmon-28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc.scope.
Nov 28 12:51:17 np0005539065 podman[227708]: 2025-11-28 17:51:17.344334696 +0000 UTC m=+0.087853197 container exec 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 28 12:51:17 np0005539065 podman[227708]: 2025-11-28 17:51:17.375590271 +0000 UTC m=+0.119108752 container exec_died 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 12:51:17 np0005539065 systemd[1]: libpod-conmon-28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc.scope: Deactivated successfully.
Nov 28 12:51:18 np0005539065 python3.9[227888]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 28 12:51:18 np0005539065 systemd[1]: Started libpod-conmon-28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc.scope.
Nov 28 12:51:18 np0005539065 podman[227889]: 2025-11-28 17:51:18.258414756 +0000 UTC m=+0.082386819 container exec 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 12:51:18 np0005539065 podman[227889]: 2025-11-28 17:51:18.289530378 +0000 UTC m=+0.113502431 container exec_died 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 12:51:18 np0005539065 systemd[1]: libpod-conmon-28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc.scope: Deactivated successfully.
Nov 28 12:51:19 np0005539065 python3.9[228069]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:51:19 np0005539065 python3.9[228222]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Nov 28 12:51:20 np0005539065 python3.9[228385]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 28 12:51:20 np0005539065 systemd[1]: Started libpod-conmon-27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95.scope.
Nov 28 12:51:20 np0005539065 podman[228386]: 2025-11-28 17:51:20.831296071 +0000 UTC m=+0.081661021 container exec 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 12:51:20 np0005539065 podman[228386]: 2025-11-28 17:51:20.862825233 +0000 UTC m=+0.113190153 container exec_died 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 28 12:51:20 np0005539065 systemd[1]: libpod-conmon-27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95.scope: Deactivated successfully.
Nov 28 12:51:21 np0005539065 python3.9[228567]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 28 12:51:21 np0005539065 systemd[1]: Started libpod-conmon-27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95.scope.
Nov 28 12:51:21 np0005539065 podman[228568]: 2025-11-28 17:51:21.804091973 +0000 UTC m=+0.081110519 container exec 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 12:51:21 np0005539065 podman[228568]: 2025-11-28 17:51:21.835292896 +0000 UTC m=+0.112311432 container exec_died 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 28 12:51:21 np0005539065 systemd[1]: libpod-conmon-27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95.scope: Deactivated successfully.
Nov 28 12:51:22 np0005539065 podman[228721]: 2025-11-28 17:51:22.433041326 +0000 UTC m=+0.079509661 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 28 12:51:22 np0005539065 podman[228720]: 2025-11-28 17:51:22.457812709 +0000 UTC m=+0.103041675 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, 
tcib_build_tag=f26160204c78771e78cdd2489258319b, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Nov 28 12:51:22 np0005539065 python3.9[228782]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:51:23 np0005539065 python3.9[228936]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Nov 28 12:51:24 np0005539065 podman[229099]: 2025-11-28 17:51:24.100450464 +0000 UTC m=+0.066089415 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, architecture=x86_64)
Nov 28 12:51:24 np0005539065 python3.9[229100]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 28 12:51:24 np0005539065 systemd[1]: Started libpod-conmon-051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13.scope.
Nov 28 12:51:24 np0005539065 podman[229121]: 2025-11-28 17:51:24.327807033 +0000 UTC m=+0.089891207 container exec 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-08-20T13:12:41, release=1755695350, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6)
Nov 28 12:51:24 np0005539065 podman[229121]: 2025-11-28 17:51:24.334262184 +0000 UTC m=+0.096346338 container exec_died 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, managed_by=edpm_ansible, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-type=git, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, architecture=x86_64)
Nov 28 12:51:24 np0005539065 systemd[1]: libpod-conmon-051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13.scope: Deactivated successfully.
Nov 28 12:51:25 np0005539065 python3.9[229301]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 28 12:51:25 np0005539065 systemd[1]: Started libpod-conmon-051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13.scope.
Nov 28 12:51:25 np0005539065 podman[229302]: 2025-11-28 17:51:25.196085545 +0000 UTC m=+0.094058473 container exec 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, 
architecture=x86_64, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, config_id=edpm, distribution-scope=public, io.openshift.tags=minimal rhel9, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 28 12:51:25 np0005539065 podman[229302]: 2025-11-28 17:51:25.227867882 +0000 UTC m=+0.125840810 container exec_died 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 28 12:51:25 np0005539065 systemd[1]: libpod-conmon-051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13.scope: Deactivated successfully.
Nov 28 12:51:26 np0005539065 python3.9[229482]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:51:26 np0005539065 python3.9[229634]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Nov 28 12:51:27 np0005539065 podman[229772]: 2025-11-28 17:51:27.559060863 +0000 UTC m=+0.107852298 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 12:51:27 np0005539065 python3.9[229813]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 28 12:51:27 np0005539065 systemd[1]: Started libpod-conmon-fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1.scope.
Nov 28 12:51:27 np0005539065 podman[229818]: 2025-11-28 17:51:27.890672413 +0000 UTC m=+0.125361229 container exec fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, 
org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 28 12:51:27 np0005539065 podman[229818]: 2025-11-28 17:51:27.923782222 +0000 UTC m=+0.158471008 container exec_died fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 
Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Nov 28 12:51:27 np0005539065 systemd[1]: libpod-conmon-fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1.scope: Deactivated successfully.
Nov 28 12:51:28 np0005539065 python3.9[229999]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 28 12:51:28 np0005539065 systemd[1]: Started libpod-conmon-fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1.scope.
Nov 28 12:51:28 np0005539065 podman[230000]: 2025-11-28 17:51:28.901336084 +0000 UTC m=+0.091129524 container exec fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, 
config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 28 12:51:28 np0005539065 podman[230000]: 2025-11-28 17:51:28.911848782 +0000 UTC m=+0.101642202 container exec_died fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 28 12:51:28 np0005539065 systemd[1]: libpod-conmon-fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1.scope: Deactivated successfully.
Nov 28 12:51:29 np0005539065 podman[230030]: 2025-11-28 17:51:29.093526014 +0000 UTC m=+0.092098896 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 28 12:51:29 np0005539065 systemd[1]: fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1-71ba8aa7043fb9c6.service: Main process exited, code=exited, status=1/FAILURE
Nov 28 12:51:29 np0005539065 systemd[1]: fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1-71ba8aa7043fb9c6.service: Failed with result 'exit-code'.
Nov 28 12:51:29 np0005539065 podman[203494]: time="2025-11-28T17:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 12:51:29 np0005539065 podman[203494]: @ - - [28/Nov/2025:17:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 28 12:51:29 np0005539065 podman[203494]: @ - - [28/Nov/2025:17:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4269 "" "Go-http-client/1.1"
Nov 28 12:51:29 np0005539065 python3.9[230200]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:51:30 np0005539065 podman[230324]: 2025-11-28 17:51:30.526753545 +0000 UTC m=+0.085910531 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, distribution-scope=public, config_id=edpm, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, io.openshift.tags=base rhel9, architecture=x86_64, container_name=kepler, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': 
['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Nov 28 12:51:30 np0005539065 python3.9[230369]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Nov 28 12:51:31 np0005539065 podman[230505]: 2025-11-28 17:51:31.396067812 +0000 UTC m=+0.072035106 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 12:51:31 np0005539065 openstack_network_exporter[205632]: ERROR   17:51:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 12:51:31 np0005539065 openstack_network_exporter[205632]: ERROR   17:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 12:51:31 np0005539065 openstack_network_exporter[205632]: ERROR   17:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 12:51:31 np0005539065 openstack_network_exporter[205632]: ERROR   17:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 12:51:31 np0005539065 openstack_network_exporter[205632]: 
Nov 28 12:51:31 np0005539065 openstack_network_exporter[205632]: ERROR   17:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 12:51:31 np0005539065 openstack_network_exporter[205632]: 
Nov 28 12:51:31 np0005539065 python3.9[230549]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 28 12:51:31 np0005539065 systemd[1]: Started libpod-conmon-f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7.scope.
Nov 28 12:51:31 np0005539065 podman[230556]: 2025-11-28 17:51:31.696134689 +0000 UTC m=+0.099793528 container exec f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, maintainer=Red Hat, Inc., release-0.7.12=, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', 
'/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, architecture=x86_64)
Nov 28 12:51:31 np0005539065 podman[230556]: 2025-11-28 17:51:31.727857136 +0000 UTC m=+0.131515945 container exec_died f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, maintainer=Red Hat, Inc., name=ubi9, vcs-type=git, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.openshift.tags=base rhel9, architecture=x86_64)
Nov 28 12:51:31 np0005539065 systemd[1]: libpod-conmon-f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7.scope: Deactivated successfully.
Nov 28 12:51:32 np0005539065 python3.9[230737]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 28 12:51:32 np0005539065 systemd[1]: Started libpod-conmon-f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7.scope.
Nov 28 12:51:32 np0005539065 podman[230738]: 2025-11-28 17:51:32.736635483 +0000 UTC m=+0.108632126 container exec f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, release=1214.1726694543, version=9.4, config_id=edpm, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=kepler, maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=)
Nov 28 12:51:32 np0005539065 podman[230738]: 2025-11-28 17:51:32.769795753 +0000 UTC m=+0.141792396 container exec_died f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, distribution-scope=public, release-0.7.12=, io.buildah.version=1.29.0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, config_id=edpm, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, container_name=kepler, maintainer=Red Hat, Inc.)
Nov 28 12:51:32 np0005539065 systemd[1]: libpod-conmon-f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7.scope: Deactivated successfully.
Nov 28 12:51:33 np0005539065 python3.9[230919]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:51:34 np0005539065 podman[231071]: 2025-11-28 17:51:34.386913998 +0000 UTC m=+0.129694972 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 12:51:34 np0005539065 python3.9[231072]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:51:35 np0005539065 python3.9[231249]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:51:35 np0005539065 python3.9[231372]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764352294.6654317-844-221789835111530/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:51:36 np0005539065 python3.9[231524]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:51:37 np0005539065 python3.9[231676]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:51:37 np0005539065 python3.9[231754]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:51:38 np0005539065 python3.9[231906]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:51:39 np0005539065 python3.9[231984]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.8bvt_gje recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:51:39 np0005539065 python3.9[232136]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:51:40 np0005539065 python3.9[232214]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:51:41 np0005539065 python3.9[232366]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:51:42 np0005539065 python3[232519]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 28 12:51:42 np0005539065 python3.9[232671]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:51:43 np0005539065 podman[232680]: 2025-11-28 17:51:43.018803646 +0000 UTC m=+0.074428632 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 28 12:51:43 np0005539065 python3.9[232775]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:51:44 np0005539065 python3.9[232927]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:51:44 np0005539065 python3.9[233005]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:51:45 np0005539065 python3.9[233157]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:51:45 np0005539065 python3.9[233235]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:51:46 np0005539065 python3.9[233387]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:51:47 np0005539065 python3.9[233465]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:51:47 np0005539065 python3.9[233617]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:51:48 np0005539065 python3.9[233742]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764352307.388536-969-75487207673666/.source.nft follow=False _original_basename=ruleset.j2 checksum=b82fbd2c71bb7c36c630c2301913f0f42fd2e7ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:51:49 np0005539065 python3.9[233894]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:51:49 np0005539065 nova_compute[189296]: 2025-11-28 17:51:49.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:51:50 np0005539065 python3.9[234047]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:51:50 np0005539065 nova_compute[189296]: 2025-11-28 17:51:50.621 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:51:50 np0005539065 nova_compute[189296]: 2025-11-28 17:51:50.639 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:51:50 np0005539065 nova_compute[189296]: 2025-11-28 17:51:50.639 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 12:51:50 np0005539065 nova_compute[189296]: 2025-11-28 17:51:50.639 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 12:51:50 np0005539065 nova_compute[189296]: 2025-11-28 17:51:50.651 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 28 12:51:51 np0005539065 python3.9[234202]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:51:51 np0005539065 nova_compute[189296]: 2025-11-28 17:51:51.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:51:51 np0005539065 nova_compute[189296]: 2025-11-28 17:51:51.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:51:51 np0005539065 nova_compute[189296]: 2025-11-28 17:51:51.627 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:51:51 np0005539065 nova_compute[189296]: 2025-11-28 17:51:51.684 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 12:51:51 np0005539065 nova_compute[189296]: 2025-11-28 17:51:51.685 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 12:51:51 np0005539065 nova_compute[189296]: 2025-11-28 17:51:51.685 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 12:51:51 np0005539065 nova_compute[189296]: 2025-11-28 17:51:51.686 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 12:51:51 np0005539065 python3.9[234354]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:51:51 np0005539065 nova_compute[189296]: 2025-11-28 17:51:51.990 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 12:51:51 np0005539065 nova_compute[189296]: 2025-11-28 17:51:51.991 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5636MB free_disk=72.43985366821289GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 12:51:51 np0005539065 nova_compute[189296]: 2025-11-28 17:51:51.992 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 12:51:51 np0005539065 nova_compute[189296]: 2025-11-28 17:51:51.992 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 12:51:52 np0005539065 nova_compute[189296]: 2025-11-28 17:51:52.057 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 12:51:52 np0005539065 nova_compute[189296]: 2025-11-28 17:51:52.057 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 12:51:52 np0005539065 nova_compute[189296]: 2025-11-28 17:51:52.088 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 12:51:52 np0005539065 nova_compute[189296]: 2025-11-28 17:51:52.101 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 12:51:52 np0005539065 nova_compute[189296]: 2025-11-28 17:51:52.102 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 12:51:52 np0005539065 nova_compute[189296]: 2025-11-28 17:51:52.103 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 12:51:52 np0005539065 python3.9[234507]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 12:51:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:51:52.589 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 12:51:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:51:52.589 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 12:51:52 np0005539065 ovn_metadata_agent[106619]: 2025-11-28 17:51:52.589 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 12:51:53 np0005539065 podman[234608]: 2025-11-28 17:51:53.014515437 +0000 UTC m=+0.071693035 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 28 12:51:53 np0005539065 podman[234598]: 2025-11-28 17:51:53.015274436 +0000 UTC m=+0.074129115 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=f26160204c78771e78cdd2489258319b, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Nov 28 12:51:53 np0005539065 nova_compute[189296]: 2025-11-28 17:51:53.102 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:51:53 np0005539065 nova_compute[189296]: 2025-11-28 17:51:53.102 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:51:53 np0005539065 nova_compute[189296]: 2025-11-28 17:51:53.102 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:51:53 np0005539065 nova_compute[189296]: 2025-11-28 17:51:53.102 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 12:51:53 np0005539065 python3.9[234696]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 12:51:53 np0005539065 nova_compute[189296]: 2025-11-28 17:51:53.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 12:51:54 np0005539065 python3.9[234851]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:51:54 np0005539065 systemd[1]: session-27.scope: Deactivated successfully.
Nov 28 12:51:54 np0005539065 systemd[1]: session-27.scope: Consumed 1min 19.589s CPU time.
Nov 28 12:51:54 np0005539065 systemd-logind[790]: Session 27 logged out. Waiting for processes to exit.
Nov 28 12:51:54 np0005539065 systemd-logind[790]: Removed session 27.
Nov 28 12:51:54 np0005539065 podman[234876]: 2025-11-28 17:51:54.606563697 +0000 UTC m=+0.059193599 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.tags=minimal rhel9, distribution-scope=public, maintainer=Red Hat, Inc., release=1755695350, version=9.6, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41)
Nov 28 12:51:57 np0005539065 podman[234898]: 2025-11-28 17:51:57.984594893 +0000 UTC m=+0.051919621 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 28 12:51:59 np0005539065 podman[203494]: time="2025-11-28T17:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 12:51:59 np0005539065 podman[203494]: @ - - [28/Nov/2025:17:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 28 12:51:59 np0005539065 podman[203494]: @ - - [28/Nov/2025:17:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4269 "" "Go-http-client/1.1"
Nov 28 12:52:00 np0005539065 podman[234918]: 2025-11-28 17:52:00.037778815 +0000 UTC m=+0.099848903 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 28 12:52:00 np0005539065 systemd-logind[790]: New session 28 of user zuul.
Nov 28 12:52:00 np0005539065 systemd[1]: Started Session 28 of User zuul.
Nov 28 12:52:01 np0005539065 podman[235017]: 2025-11-28 17:52:01.040394675 +0000 UTC m=+0.099974787 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, architecture=x86_64, build-date=2024-09-18T21:23:30, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, release-0.7.12=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, distribution-scope=public, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler)
Nov 28 12:52:01 np0005539065 openstack_network_exporter[205632]: ERROR   17:52:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 12:52:01 np0005539065 openstack_network_exporter[205632]: ERROR   17:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 12:52:01 np0005539065 openstack_network_exporter[205632]: ERROR   17:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 12:52:01 np0005539065 openstack_network_exporter[205632]: ERROR   17:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 12:52:01 np0005539065 openstack_network_exporter[205632]: 
Nov 28 12:52:01 np0005539065 openstack_network_exporter[205632]: ERROR   17:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 12:52:01 np0005539065 openstack_network_exporter[205632]: 
Nov 28 12:52:01 np0005539065 python3.9[235110]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 12:52:02 np0005539065 podman[235157]: 2025-11-28 17:52:02.023433015 +0000 UTC m=+0.086762734 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 28 12:52:02 np0005539065 python3.9[235290]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Nov 28 12:52:03 np0005539065 python3.9[235443]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 28 12:52:04 np0005539065 podman[235499]: 2025-11-28 17:52:04.552326207 +0000 UTC m=+0.096250356 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125)
Nov 28 12:52:04 np0005539065 python3.9[235546]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 28 12:52:07 np0005539065 python3.9[235710]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:52:08 np0005539065 python3.9[235833]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764352327.3286617-54-192520620741144/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:52:09 np0005539065 python3.9[235985]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 12:52:10 np0005539065 python3.9[236137]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 12:52:11 np0005539065 python3.9[236260]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764352329.9253135-77-91534513665954/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 17:52:11 compute-0 python3.9[236412]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 17:52:11 compute-0 systemd[1]: Stopping System Logging Service...
Nov 28 17:52:12 compute-0 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] exiting on signal 15.
Nov 28 17:52:12 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Nov 28 17:52:12 compute-0 systemd[1]: Stopped System Logging Service.
Nov 28 17:52:12 compute-0 systemd[1]: rsyslog.service: Consumed 3.915s CPU time, 9.7M memory peak, read 0B from disk, written 6.1M to disk.
Nov 28 17:52:12 compute-0 systemd[1]: Starting System Logging Service...
Nov 28 17:52:12 compute-0 rsyslogd[236416]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="236416" x-info="https://www.rsyslog.com"] start
Nov 28 17:52:12 compute-0 systemd[1]: Started System Logging Service.
Nov 28 17:52:12 compute-0 rsyslogd[236416]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 28 17:52:12 compute-0 rsyslogd[236416]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Nov 28 17:52:12 compute-0 rsyslogd[236416]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Nov 28 17:52:12 compute-0 rsyslogd[236416]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Nov 28 17:52:12 compute-0 rsyslogd[236416]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
Nov 28 17:52:12 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Nov 28 17:52:12 compute-0 systemd[1]: session-28.scope: Consumed 9.323s CPU time.
Nov 28 17:52:12 compute-0 systemd-logind[790]: Session 28 logged out. Waiting for processes to exit.
Nov 28 17:52:12 compute-0 systemd-logind[790]: Removed session 28.
Nov 28 17:52:14 compute-0 podman[236445]: 2025-11-28 17:52:14.006443038 +0000 UTC m=+0.064895040 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 28 17:52:24 compute-0 podman[236470]: 2025-11-28 17:52:24.004471296 +0000 UTC m=+0.069469590 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4)
Nov 28 17:52:24 compute-0 podman[236471]: 2025-11-28 17:52:24.015475285 +0000 UTC m=+0.077793654 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 28 17:52:25 compute-0 podman[236509]: 2025-11-28 17:52:25.033058131 +0000 UTC m=+0.091028238 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, config_id=edpm, io.buildah.version=1.33.7, distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=)
Nov 28 17:52:29 compute-0 podman[236529]: 2025-11-28 17:52:29.001537682 +0000 UTC m=+0.066650091 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 28 17:52:29 compute-0 podman[203494]: time="2025-11-28T17:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 17:52:29 compute-0 podman[203494]: @ - - [28/Nov/2025:17:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 28 17:52:29 compute-0 podman[203494]: @ - - [28/Nov/2025:17:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4270 "" "Go-http-client/1.1"
Nov 28 17:52:31 compute-0 podman[236547]: 2025-11-28 17:52:31.013167249 +0000 UTC m=+0.071636394 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack 
Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm)
Nov 28 17:52:31 compute-0 openstack_network_exporter[205632]: ERROR   17:52:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 17:52:31 compute-0 openstack_network_exporter[205632]: ERROR   17:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:52:31 compute-0 openstack_network_exporter[205632]: ERROR   17:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:52:31 compute-0 openstack_network_exporter[205632]: ERROR   17:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 17:52:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 17:52:31 compute-0 openstack_network_exporter[205632]: ERROR   17:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 17:52:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 17:52:31 compute-0 podman[236567]: 2025-11-28 17:52:31.999230683 +0000 UTC m=+0.066895717 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, vcs-type=git, version=9.4, config_id=edpm, release=1214.1726694543, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 28 17:52:32 compute-0 podman[236585]: 2025-11-28 17:52:32.991436188 +0000 UTC m=+0.057894317 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 17:52:35 compute-0 podman[236608]: 2025-11-28 17:52:35.021602747 +0000 UTC m=+0.087130752 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 17:52:44 compute-0 podman[236632]: 2025-11-28 17:52:44.721494706 +0000 UTC m=+0.054517382 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 17:52:48 compute-0 nova_compute[189296]: 2025-11-28 17:52:48.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:52:48 compute-0 nova_compute[189296]: 2025-11-28 17:52:48.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 28 17:52:48 compute-0 nova_compute[189296]: 2025-11-28 17:52:48.644 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 28 17:52:48 compute-0 nova_compute[189296]: 2025-11-28 17:52:48.645 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:52:48 compute-0 nova_compute[189296]: 2025-11-28 17:52:48.645 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 28 17:52:48 compute-0 nova_compute[189296]: 2025-11-28 17:52:48.658 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:52:50 compute-0 nova_compute[189296]: 2025-11-28 17:52:50.666 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:52:51 compute-0 nova_compute[189296]: 2025-11-28 17:52:51.620 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:52:51 compute-0 nova_compute[189296]: 2025-11-28 17:52:51.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:52:51 compute-0 nova_compute[189296]: 2025-11-28 17:52:51.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 17:52:51 compute-0 nova_compute[189296]: 2025-11-28 17:52:51.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 17:52:51 compute-0 nova_compute[189296]: 2025-11-28 17:52:51.654 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 28 17:52:51 compute-0 nova_compute[189296]: 2025-11-28 17:52:51.655 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.973 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.974 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.974 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.974 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc143395760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.975 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.975 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.975 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.975 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.975 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.976 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.976 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.976 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.976 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.976 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.976 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.977 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc1433970b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.977 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.977 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc1433971d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.977 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.977 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc143397c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.977 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.977 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc143397620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.978 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.978 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc143397260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.978 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.978 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc143397290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.978 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.978 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc143408290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.978 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.978 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc1433972f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.979 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.979 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc144640f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.979 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.979 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc1433976b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.979 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.976 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.980 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc143397fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.982 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.982 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc14457db80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.982 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.982 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc143397950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.982 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.983 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc143397380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.983 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.983 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc143397bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.983 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.983 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc1433973e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.983 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.983 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc143397c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.983 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.983 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc143397ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.983 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.983 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc1460ad370>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.983 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.983 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc143397d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.984 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.984 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc143397e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.984 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.984 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc143397650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.984 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.984 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc143397e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.984 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.984 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc143397f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.984 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.984 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc143397230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.984 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.985 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.985 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.985 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.985 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.985 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.985 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.985 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.985 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.985 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.986 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.986 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.986 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.986 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.986 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.986 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.986 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.986 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.987 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.987 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.987 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.987 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.987 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.987 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.987 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.987 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:52:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:52:51.987 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:52:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:52:52.590 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 17:52:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:52:52.591 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 17:52:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:52:52.591 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 17:52:52 compute-0 nova_compute[189296]: 2025-11-28 17:52:52.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 17:52:52 compute-0 nova_compute[189296]: 2025-11-28 17:52:52.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 17:52:52 compute-0 nova_compute[189296]: 2025-11-28 17:52:52.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 28 17:52:53 compute-0 nova_compute[189296]: 2025-11-28 17:52:53.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 17:52:53 compute-0 nova_compute[189296]: 2025-11-28 17:52:53.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 17:52:53 compute-0 nova_compute[189296]: 2025-11-28 17:52:53.655 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 17:52:53 compute-0 nova_compute[189296]: 2025-11-28 17:52:53.656 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:52:53 compute-0 nova_compute[189296]: 2025-11-28 17:52:53.656 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:52:53 compute-0 nova_compute[189296]: 2025-11-28 17:52:53.656 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 17:52:53 compute-0 nova_compute[189296]: 2025-11-28 17:52:53.964 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 17:52:53 compute-0 nova_compute[189296]: 2025-11-28 17:52:53.965 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5713MB free_disk=72.43762588500977GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 17:52:53 compute-0 nova_compute[189296]: 2025-11-28 17:52:53.965 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:52:53 compute-0 nova_compute[189296]: 2025-11-28 17:52:53.966 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:52:54 compute-0 nova_compute[189296]: 2025-11-28 17:52:54.105 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 17:52:54 compute-0 nova_compute[189296]: 2025-11-28 17:52:54.106 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 17:52:54 compute-0 nova_compute[189296]: 2025-11-28 17:52:54.197 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing inventories for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 28 17:52:54 compute-0 nova_compute[189296]: 2025-11-28 17:52:54.267 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Updating ProviderTree inventory for provider d10a9930-4504-4222-97f7-6727a5a2d43b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 28 17:52:54 compute-0 nova_compute[189296]: 2025-11-28 17:52:54.267 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Updating inventory in ProviderTree for provider d10a9930-4504-4222-97f7-6727a5a2d43b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 28 17:52:54 compute-0 nova_compute[189296]: 2025-11-28 17:52:54.280 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing aggregate associations for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 28 17:52:54 compute-0 nova_compute[189296]: 2025-11-28 17:52:54.301 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing trait associations for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b, traits: HW_CPU_X86_ABM,COMPUTE_NODE,HW_CPU_X86_SVM,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SATA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 28 17:52:54 compute-0 nova_compute[189296]: 2025-11-28 17:52:54.324 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 17:52:54 compute-0 nova_compute[189296]: 2025-11-28 17:52:54.337 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 17:52:54 compute-0 nova_compute[189296]: 2025-11-28 17:52:54.339 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 17:52:54 compute-0 nova_compute[189296]: 2025-11-28 17:52:54.340 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:52:54 compute-0 systemd-logind[790]: New session 29 of user zuul.
Nov 28 17:52:54 compute-0 systemd[1]: Started Session 29 of User zuul.
Nov 28 17:52:54 compute-0 podman[236661]: 2025-11-28 17:52:54.46660394 +0000 UTC m=+0.065447533 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd)
Nov 28 17:52:54 compute-0 podman[236660]: 2025-11-28 17:52:54.495792667 +0000 UTC m=+0.097291823 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
tcib_build_tag=f26160204c78771e78cdd2489258319b, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Nov 28 17:52:55 compute-0 podman[236846]: 2025-11-28 17:52:55.243896876 +0000 UTC m=+0.072897060 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, 
io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 28 17:52:55 compute-0 python3[236884]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 17:52:56 compute-0 nova_compute[189296]: 2025-11-28 17:52:56.339 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:52:57 compute-0 python3[237115]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 17:52:58 compute-0 python3[237268]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "nova_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 17:52:59 compute-0 podman[203494]: time="2025-11-28T17:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 17:52:59 compute-0 podman[203494]: @ - - [28/Nov/2025:17:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 28 17:52:59 compute-0 podman[203494]: @ - - [28/Nov/2025:17:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4275 "" "Go-http-client/1.1"
Nov 28 17:53:00 compute-0 podman[237295]: 2025-11-28 17:53:00.017857431 +0000 UTC m=+0.083058513 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Nov 28 17:53:00 compute-0 python3[237437]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 17:53:01 compute-0 podman[237563]: 2025-11-28 17:53:01.313835142 +0000 UTC m=+0.073988555 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 17:53:01 compute-0 openstack_network_exporter[205632]: ERROR   17:53:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 17:53:01 compute-0 openstack_network_exporter[205632]: ERROR   17:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:53:01 compute-0 openstack_network_exporter[205632]: ERROR   17:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:53:01 compute-0 openstack_network_exporter[205632]: ERROR   17:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 17:53:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 17:53:01 compute-0 openstack_network_exporter[205632]: ERROR   17:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 17:53:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 17:53:01 compute-0 python3[237601]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 17:53:03 compute-0 podman[237706]: 2025-11-28 17:53:03.062520265 +0000 UTC m=+0.111030409 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, release-0.7.12=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, container_name=kepler, distribution-scope=public, config_id=edpm, io.buildah.version=1.29.0, io.openshift.expose-services=, architecture=x86_64)
Nov 28 17:53:03 compute-0 podman[237723]: 2025-11-28 17:53:03.175519112 +0000 UTC m=+0.105761655 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 28 17:53:06 compute-0 podman[237747]: 2025-11-28 17:53:06.043746207 +0000 UTC m=+0.105539050 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller)
Nov 28 17:53:14 compute-0 podman[237774]: 2025-11-28 17:53:14.990990533 +0000 UTC m=+0.052787161 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 28 17:53:25 compute-0 podman[237797]: 2025-11-28 17:53:25.011081897 +0000 UTC m=+0.075292216 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Nov 28 17:53:25 compute-0 podman[237798]: 2025-11-28 17:53:25.025388799 +0000 UTC m=+0.084466646 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0)
Nov 28 17:53:26 compute-0 podman[237837]: 2025-11-28 17:53:26.018170986 +0000 UTC m=+0.074378496 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 28 17:53:29 compute-0 podman[203494]: time="2025-11-28T17:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 17:53:29 compute-0 podman[203494]: @ - - [28/Nov/2025:17:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 28 17:53:29 compute-0 podman[203494]: @ - - [28/Nov/2025:17:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4278 "" "Go-http-client/1.1"
Nov 28 17:53:31 compute-0 podman[237858]: 2025-11-28 17:53:31.039455382 +0000 UTC m=+0.093180025 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 28 17:53:31 compute-0 openstack_network_exporter[205632]: ERROR   17:53:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 17:53:31 compute-0 openstack_network_exporter[205632]: ERROR   17:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:53:31 compute-0 openstack_network_exporter[205632]: ERROR   17:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:53:31 compute-0 openstack_network_exporter[205632]: ERROR   17:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 17:53:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 17:53:31 compute-0 openstack_network_exporter[205632]: ERROR   17:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 17:53:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 17:53:32 compute-0 podman[237877]: 2025-11-28 17:53:32.015413318 +0000 UTC m=+0.072384178 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 28 17:53:34 compute-0 podman[237897]: 2025-11-28 17:53:34.03270594 +0000 UTC m=+0.084762604 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, distribution-scope=public, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, vcs-type=git, release-0.7.12=, name=ubi9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Nov 28 17:53:34 compute-0 podman[237896]: 2025-11-28 17:53:34.063623547 +0000 UTC m=+0.117132416 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 17:53:37 compute-0 podman[237937]: 2025-11-28 17:53:37.091560582 +0000 UTC m=+0.132554723 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 28 17:53:45 compute-0 podman[237963]: 2025-11-28 17:53:45.991140783 +0000 UTC m=+0.058530927 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 28 17:53:51 compute-0 nova_compute[189296]: 2025-11-28 17:53:51.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:53:51 compute-0 nova_compute[189296]: 2025-11-28 17:53:51.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 17:53:51 compute-0 nova_compute[189296]: 2025-11-28 17:53:51.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 17:53:51 compute-0 nova_compute[189296]: 2025-11-28 17:53:51.642 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 28 17:53:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:53:52.592 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:53:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:53:52.592 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:53:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:53:52.593 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:53:52 compute-0 nova_compute[189296]: 2025-11-28 17:53:52.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:53:53 compute-0 nova_compute[189296]: 2025-11-28 17:53:53.621 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:53:53 compute-0 nova_compute[189296]: 2025-11-28 17:53:53.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:53:53 compute-0 nova_compute[189296]: 2025-11-28 17:53:53.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:53:53 compute-0 nova_compute[189296]: 2025-11-28 17:53:53.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:53:53 compute-0 nova_compute[189296]: 2025-11-28 17:53:53.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 17:53:54 compute-0 nova_compute[189296]: 2025-11-28 17:53:54.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:53:54 compute-0 nova_compute[189296]: 2025-11-28 17:53:54.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:53:54 compute-0 nova_compute[189296]: 2025-11-28 17:53:54.662 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:53:54 compute-0 nova_compute[189296]: 2025-11-28 17:53:54.663 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:53:54 compute-0 nova_compute[189296]: 2025-11-28 17:53:54.664 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:53:54 compute-0 nova_compute[189296]: 2025-11-28 17:53:54.664 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 17:53:54 compute-0 nova_compute[189296]: 2025-11-28 17:53:54.983 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 17:53:54 compute-0 nova_compute[189296]: 2025-11-28 17:53:54.985 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5706MB free_disk=72.43742752075195GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 17:53:54 compute-0 nova_compute[189296]: 2025-11-28 17:53:54.985 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:53:54 compute-0 nova_compute[189296]: 2025-11-28 17:53:54.985 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:53:55 compute-0 nova_compute[189296]: 2025-11-28 17:53:55.070 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 17:53:55 compute-0 nova_compute[189296]: 2025-11-28 17:53:55.071 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 17:53:55 compute-0 nova_compute[189296]: 2025-11-28 17:53:55.092 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 17:53:55 compute-0 nova_compute[189296]: 2025-11-28 17:53:55.105 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 17:53:55 compute-0 nova_compute[189296]: 2025-11-28 17:53:55.108 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 17:53:55 compute-0 nova_compute[189296]: 2025-11-28 17:53:55.108 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:53:56 compute-0 podman[237987]: 2025-11-28 17:53:56.037568258 +0000 UTC m=+0.093333477 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 28 17:53:56 compute-0 podman[237988]: 2025-11-28 17:53:56.039074104 +0000 UTC m=+0.084256234 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 28 17:53:56 compute-0 nova_compute[189296]: 2025-11-28 17:53:56.105 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:53:56 compute-0 podman[238023]: 2025-11-28 17:53:56.124513077 +0000 UTC m=+0.064713828 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, distribution-scope=public, vcs-type=git, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, architecture=x86_64, io.openshift.expose-services=, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc.)
Nov 28 17:53:56 compute-0 nova_compute[189296]: 2025-11-28 17:53:56.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:53:59 compute-0 podman[203494]: time="2025-11-28T17:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 17:53:59 compute-0 podman[203494]: @ - - [28/Nov/2025:17:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 28 17:53:59 compute-0 podman[203494]: @ - - [28/Nov/2025:17:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4272 "" "Go-http-client/1.1"
Nov 28 17:54:01 compute-0 openstack_network_exporter[205632]: ERROR   17:54:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 17:54:01 compute-0 openstack_network_exporter[205632]: ERROR   17:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:54:01 compute-0 openstack_network_exporter[205632]: ERROR   17:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:54:01 compute-0 openstack_network_exporter[205632]: ERROR   17:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 17:54:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 17:54:01 compute-0 openstack_network_exporter[205632]: ERROR   17:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 17:54:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 17:54:02 compute-0 podman[238044]: 2025-11-28 17:54:02.029393584 +0000 UTC m=+0.089397200 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 17:54:02 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Nov 28 17:54:02 compute-0 systemd[1]: session-29.scope: Consumed 6.764s CPU time.
Nov 28 17:54:02 compute-0 systemd-logind[790]: Session 29 logged out. Waiting for processes to exit.
Nov 28 17:54:02 compute-0 systemd-logind[790]: Removed session 29.
Nov 28 17:54:02 compute-0 podman[238062]: 2025-11-28 17:54:02.12683605 +0000 UTC m=+0.068788198 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm)
Nov 28 17:54:05 compute-0 podman[238081]: 2025-11-28 17:54:05.011347007 +0000 UTC m=+0.077407337 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 28 17:54:05 compute-0 podman[238082]: 2025-11-28 17:54:05.034921552 +0000 UTC m=+0.095481658 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-container, container_name=kepler, vcs-type=git, version=9.4, distribution-scope=public, io.openshift.expose-services=, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_id=edpm)
Nov 28 17:54:08 compute-0 podman[238119]: 2025-11-28 17:54:08.076000435 +0000 UTC m=+0.132902281 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Nov 28 17:54:17 compute-0 podman[238144]: 2025-11-28 17:54:17.005562966 +0000 UTC m=+0.063187601 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 28 17:54:27 compute-0 podman[238169]: 2025-11-28 17:54:27.016739008 +0000 UTC m=+0.068789799 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=f26160204c78771e78cdd2489258319b, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, 
container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 28 17:54:27 compute-0 podman[238168]: 2025-11-28 17:54:27.023562693 +0000 UTC m=+0.075975943 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, release=1755695350, architecture=x86_64, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 28 17:54:27 compute-0 podman[238170]: 2025-11-28 17:54:27.049631719 +0000 UTC m=+0.097547139 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd)
Nov 28 17:54:29 compute-0 podman[203494]: time="2025-11-28T17:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 17:54:29 compute-0 podman[203494]: @ - - [28/Nov/2025:17:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 28 17:54:29 compute-0 podman[203494]: @ - - [28/Nov/2025:17:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4280 "" "Go-http-client/1.1"
Nov 28 17:54:31 compute-0 openstack_network_exporter[205632]: ERROR   17:54:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 17:54:31 compute-0 openstack_network_exporter[205632]: ERROR   17:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:54:31 compute-0 openstack_network_exporter[205632]: ERROR   17:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:54:31 compute-0 openstack_network_exporter[205632]: ERROR   17:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 17:54:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 17:54:31 compute-0 openstack_network_exporter[205632]: ERROR   17:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 17:54:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 17:54:33 compute-0 podman[238226]: 2025-11-28 17:54:33.025599338 +0000 UTC m=+0.078110636 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 17:54:33 compute-0 podman[238225]: 2025-11-28 17:54:33.072610704 +0000 UTC m=+0.119419213 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 28 17:54:35 compute-0 podman[238260]: 2025-11-28 17:54:35.999144546 +0000 UTC m=+0.065765134 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 17:54:36 compute-0 podman[238261]: 2025-11-28 17:54:36.013072016 +0000 UTC m=+0.071274659 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.tags=base rhel9, config_id=edpm, distribution-scope=public, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, 
container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-container, name=ubi9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc.)
Nov 28 17:54:39 compute-0 podman[238301]: 2025-11-28 17:54:39.054671463 +0000 UTC m=+0.121595816 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 28 17:54:47 compute-0 podman[238326]: 2025-11-28 17:54:47.998768358 +0000 UTC m=+0.059850299 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 17:54:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:54:48.553 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:8b:d3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:a2:f8:d3:3f:9a'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 17:54:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:54:48.554 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 28 17:54:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:54:48.555 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d60b742f-7e94-4137-b50a-cfc8eac54167, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.975 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.975 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.975 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.976 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc143395760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.976 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.976 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.976 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.977 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.977 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.977 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.977 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.977 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.977 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.977 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.977 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.977 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.978 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.978 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.978 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.978 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.978 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.979 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.978 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc1433970b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.979 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.979 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc1433971d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.979 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.979 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc143397c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.979 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.979 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc143397620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.979 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.980 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc143397260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.980 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.980 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc143397290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.980 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.980 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc143408290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.980 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.980 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc1433972f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.980 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.980 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc144640f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.981 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.981 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc1433976b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.981 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.981 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc143397fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.981 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.981 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc14457db80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.981 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.981 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc143397950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.981 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.981 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc143397380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.982 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.982 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc143397bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.982 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.982 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc1433973e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.982 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.979 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.error': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.error': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.error': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.982 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc143397c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.983 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.983 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc143397ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.983 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.983 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc1460ad370>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.983 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.error': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.983 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.984 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc143397d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.984 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.984 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc143397e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.984 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.error': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.error': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.985 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.error': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.985 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc143397650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.985 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.985 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.error': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.986 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc143397e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.986 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.986 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc143397f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.986 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.986 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc143397230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.986 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.986 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.987 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.987 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.987 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.987 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.987 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.987 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.987 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.987 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.987 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.987 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.987 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.987 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.988 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.988 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.988 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.988 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.988 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.988 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.988 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.988 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.988 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.988 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.988 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.988 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:54:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:54:51.988 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:54:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:54:52.597 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:54:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:54:52.598 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:54:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:54:52.598 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:54:53 compute-0 nova_compute[189296]: 2025-11-28 17:54:53.623 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:54:53 compute-0 nova_compute[189296]: 2025-11-28 17:54:53.623 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:54:53 compute-0 nova_compute[189296]: 2025-11-28 17:54:53.624 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 17:54:53 compute-0 nova_compute[189296]: 2025-11-28 17:54:53.624 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 17:54:53 compute-0 nova_compute[189296]: 2025-11-28 17:54:53.638 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 28 17:54:53 compute-0 nova_compute[189296]: 2025-11-28 17:54:53.638 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:54:53 compute-0 nova_compute[189296]: 2025-11-28 17:54:53.639 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:54:53 compute-0 nova_compute[189296]: 2025-11-28 17:54:53.639 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 17:54:54 compute-0 nova_compute[189296]: 2025-11-28 17:54:54.627 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:54:54 compute-0 nova_compute[189296]: 2025-11-28 17:54:54.628 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:54:54 compute-0 nova_compute[189296]: 2025-11-28 17:54:54.628 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:54:54 compute-0 nova_compute[189296]: 2025-11-28 17:54:54.662 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:54:54 compute-0 nova_compute[189296]: 2025-11-28 17:54:54.662 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:54:54 compute-0 nova_compute[189296]: 2025-11-28 17:54:54.662 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:54:54 compute-0 nova_compute[189296]: 2025-11-28 17:54:54.662 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 17:54:54 compute-0 nova_compute[189296]: 2025-11-28 17:54:54.980 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 17:54:54 compute-0 nova_compute[189296]: 2025-11-28 17:54:54.981 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5706MB free_disk=72.43742752075195GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 17:54:54 compute-0 nova_compute[189296]: 2025-11-28 17:54:54.981 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:54:54 compute-0 nova_compute[189296]: 2025-11-28 17:54:54.981 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:54:55 compute-0 nova_compute[189296]: 2025-11-28 17:54:55.095 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 17:54:55 compute-0 nova_compute[189296]: 2025-11-28 17:54:55.096 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 17:54:55 compute-0 nova_compute[189296]: 2025-11-28 17:54:55.151 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 17:54:55 compute-0 nova_compute[189296]: 2025-11-28 17:54:55.172 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 17:54:55 compute-0 nova_compute[189296]: 2025-11-28 17:54:55.173 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 17:54:55 compute-0 nova_compute[189296]: 2025-11-28 17:54:55.173 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:54:56 compute-0 nova_compute[189296]: 2025-11-28 17:54:56.170 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:54:56 compute-0 nova_compute[189296]: 2025-11-28 17:54:56.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:54:58 compute-0 podman[238352]: 2025-11-28 17:54:58.01082791 +0000 UTC m=+0.076462245 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, version=9.6, name=ubi9-minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.expose-services=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350)
Nov 28 17:54:58 compute-0 podman[238353]: 2025-11-28 17:54:58.013244528 +0000 UTC m=+0.073827882 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=f26160204c78771e78cdd2489258319b, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 28 17:54:58 compute-0 podman[238354]: 2025-11-28 17:54:58.042646459 +0000 UTC m=+0.099645377 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 28 17:54:59 compute-0 podman[203494]: time="2025-11-28T17:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 17:54:59 compute-0 podman[203494]: @ - - [28/Nov/2025:17:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 28 17:54:59 compute-0 podman[203494]: @ - - [28/Nov/2025:17:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4273 "" "Go-http-client/1.1"
Nov 28 17:55:01 compute-0 openstack_network_exporter[205632]: ERROR   17:55:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 17:55:01 compute-0 openstack_network_exporter[205632]: ERROR   17:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:55:01 compute-0 openstack_network_exporter[205632]: ERROR   17:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:55:01 compute-0 openstack_network_exporter[205632]: ERROR   17:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 17:55:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 17:55:01 compute-0 openstack_network_exporter[205632]: ERROR   17:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 17:55:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 17:55:04 compute-0 podman[238412]: 2025-11-28 17:55:04.007010042 +0000 UTC m=+0.069773925 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 28 17:55:04 compute-0 podman[238413]: 2025-11-28 17:55:04.009226235 +0000 UTC m=+0.072235374 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 17:55:07 compute-0 podman[238449]: 2025-11-28 17:55:07.014635095 +0000 UTC m=+0.067771467 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.4, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler, distribution-scope=public, name=ubi9, vcs-type=git, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 28 17:55:07 compute-0 podman[238448]: 2025-11-28 17:55:07.046639979 +0000 UTC m=+0.095126530 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 17:55:10 compute-0 podman[238490]: 2025-11-28 17:55:10.061160866 +0000 UTC m=+0.122527402 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 28 17:55:19 compute-0 podman[238515]: 2025-11-28 17:55:19.006221579 +0000 UTC m=+0.069375075 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 28 17:55:29 compute-0 podman[238541]: 2025-11-28 17:55:29.030674608 +0000 UTC m=+0.082509729 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_id=edpm)
Nov 28 17:55:29 compute-0 podman[238542]: 2025-11-28 17:55:29.041161687 +0000 UTC m=+0.089127605 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 28 17:55:29 compute-0 podman[238540]: 2025-11-28 17:55:29.041349141 +0000 UTC m=+0.103267482 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible)
Nov 28 17:55:29 compute-0 podman[203494]: time="2025-11-28T17:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 17:55:29 compute-0 podman[203494]: @ - - [28/Nov/2025:17:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 28 17:55:29 compute-0 podman[203494]: @ - - [28/Nov/2025:17:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4271 "" "Go-http-client/1.1"
Nov 28 17:55:31 compute-0 openstack_network_exporter[205632]: ERROR   17:55:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 17:55:31 compute-0 openstack_network_exporter[205632]: ERROR   17:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:55:31 compute-0 openstack_network_exporter[205632]: ERROR   17:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:55:31 compute-0 openstack_network_exporter[205632]: ERROR   17:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 17:55:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 17:55:31 compute-0 openstack_network_exporter[205632]: ERROR   17:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 17:55:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 17:55:35 compute-0 podman[238596]: 2025-11-28 17:55:35.028809556 +0000 UTC m=+0.092000255 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 28 17:55:35 compute-0 podman[238597]: 2025-11-28 17:55:35.03024545 +0000 UTC m=+0.092857416 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 28 17:55:37 compute-0 podman[238633]: 2025-11-28 17:55:37.99627225 +0000 UTC m=+0.061414466 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 28 17:55:37 compute-0 podman[238634]: 2025-11-28 17:55:37.999445095 +0000 UTC m=+0.060393221 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=base rhel9, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, config_id=edpm, distribution-scope=public, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, container_name=kepler, maintainer=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 28 17:55:40 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:55:40.145 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:8b:d3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:a2:f8:d3:3f:9a'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 17:55:40 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:55:40.145 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 28 17:55:40 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:55:40.146 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d60b742f-7e94-4137-b50a-cfc8eac54167, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 17:55:41 compute-0 podman[238675]: 2025-11-28 17:55:41.018096811 +0000 UTC m=+0.085900409 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 28 17:55:50 compute-0 podman[238702]: 2025-11-28 17:55:50.01452346 +0000 UTC m=+0.070221826 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 17:55:50 compute-0 nova_compute[189296]: 2025-11-28 17:55:50.122 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:55:50 compute-0 nova_compute[189296]: 2025-11-28 17:55:50.122 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:55:50 compute-0 nova_compute[189296]: 2025-11-28 17:55:50.158 189300 DEBUG nova.compute.manager [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 28 17:55:50 compute-0 nova_compute[189296]: 2025-11-28 17:55:50.291 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:55:50 compute-0 nova_compute[189296]: 2025-11-28 17:55:50.291 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:55:50 compute-0 nova_compute[189296]: 2025-11-28 17:55:50.300 189300 DEBUG nova.virt.hardware [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 28 17:55:50 compute-0 nova_compute[189296]: 2025-11-28 17:55:50.301 189300 INFO nova.compute.claims [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 28 17:55:50 compute-0 nova_compute[189296]: 2025-11-28 17:55:50.427 189300 DEBUG nova.compute.provider_tree [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 17:55:50 compute-0 nova_compute[189296]: 2025-11-28 17:55:50.441 189300 DEBUG nova.scheduler.client.report [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 17:55:50 compute-0 nova_compute[189296]: 2025-11-28 17:55:50.468 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:55:50 compute-0 nova_compute[189296]: 2025-11-28 17:55:50.468 189300 DEBUG nova.compute.manager [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 28 17:55:50 compute-0 nova_compute[189296]: 2025-11-28 17:55:50.528 189300 DEBUG nova.compute.manager [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 28 17:55:50 compute-0 nova_compute[189296]: 2025-11-28 17:55:50.528 189300 DEBUG nova.network.neutron [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 28 17:55:50 compute-0 nova_compute[189296]: 2025-11-28 17:55:50.553 189300 INFO nova.virt.libvirt.driver [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 28 17:55:50 compute-0 nova_compute[189296]: 2025-11-28 17:55:50.613 189300 DEBUG nova.compute.manager [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 28 17:55:50 compute-0 nova_compute[189296]: 2025-11-28 17:55:50.716 189300 DEBUG nova.compute.manager [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 28 17:55:50 compute-0 nova_compute[189296]: 2025-11-28 17:55:50.718 189300 DEBUG nova.virt.libvirt.driver [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 28 17:55:50 compute-0 nova_compute[189296]: 2025-11-28 17:55:50.718 189300 INFO nova.virt.libvirt.driver [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Creating image(s)#033[00m
Nov 28 17:55:50 compute-0 nova_compute[189296]: 2025-11-28 17:55:50.719 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "/var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:55:50 compute-0 nova_compute[189296]: 2025-11-28 17:55:50.719 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "/var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:55:50 compute-0 nova_compute[189296]: 2025-11-28 17:55:50.720 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "/var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:55:50 compute-0 nova_compute[189296]: 2025-11-28 17:55:50.721 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "f8e1ccb00af4752d8a5c7b44d7152dd9458fb598" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:55:50 compute-0 nova_compute[189296]: 2025-11-28 17:55:50.721 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "f8e1ccb00af4752d8a5c7b44d7152dd9458fb598" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:55:51 compute-0 nova_compute[189296]: 2025-11-28 17:55:51.197 189300 WARNING oslo_policy.policy [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Nov 28 17:55:51 compute-0 nova_compute[189296]: 2025-11-28 17:55:51.198 189300 WARNING oslo_policy.policy [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Nov 28 17:55:51 compute-0 nova_compute[189296]: 2025-11-28 17:55:51.811 189300 DEBUG nova.network.neutron [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Successfully created port: 0e0a227a-6212-4496-8954-fe210b763d0b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 28 17:55:51 compute-0 nova_compute[189296]: 2025-11-28 17:55:51.841 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:55:51 compute-0 nova_compute[189296]: 2025-11-28 17:55:51.901 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598.part --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:55:51 compute-0 nova_compute[189296]: 2025-11-28 17:55:51.902 189300 DEBUG nova.virt.images [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] f54c2688-82d2-4cd3-8c3b-96e774162948 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 28 17:55:51 compute-0 nova_compute[189296]: 2025-11-28 17:55:51.903 189300 DEBUG nova.privsep.utils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 28 17:55:51 compute-0 nova_compute[189296]: 2025-11-28 17:55:51.904 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598.part /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 17:55:52 compute-0 nova_compute[189296]: 2025-11-28 17:55:52.097 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598.part /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598.converted" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 17:55:52 compute-0 nova_compute[189296]: 2025-11-28 17:55:52.109 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 17:55:52 compute-0 nova_compute[189296]: 2025-11-28 17:55:52.164 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598.converted --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 17:55:52 compute-0 nova_compute[189296]: 2025-11-28 17:55:52.166 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "f8e1ccb00af4752d8a5c7b44d7152dd9458fb598" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 17:55:52 compute-0 nova_compute[189296]: 2025-11-28 17:55:52.178 189300 INFO oslo.privsep.daemon [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpnjj3vhia/privsep.sock']
Nov 28 17:55:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:55:52.599 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 17:55:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:55:52.600 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 17:55:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:55:52.600 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 17:55:52 compute-0 nova_compute[189296]: 2025-11-28 17:55:52.848 189300 INFO oslo.privsep.daemon [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Spawned new privsep daemon via rootwrap
Nov 28 17:55:52 compute-0 nova_compute[189296]: 2025-11-28 17:55:52.725 238742 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 28 17:55:52 compute-0 nova_compute[189296]: 2025-11-28 17:55:52.729 238742 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 28 17:55:52 compute-0 nova_compute[189296]: 2025-11-28 17:55:52.731 238742 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 28 17:55:52 compute-0 nova_compute[189296]: 2025-11-28 17:55:52.732 238742 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238742
Nov 28 17:55:52 compute-0 nova_compute[189296]: 2025-11-28 17:55:52.966 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.022 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.023 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "f8e1ccb00af4752d8a5c7b44d7152dd9458fb598" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.024 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "f8e1ccb00af4752d8a5c7b44d7152dd9458fb598" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.038 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.091 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.092 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598,backing_fmt=raw /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.129 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598,backing_fmt=raw /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk 1073741824" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.130 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "f8e1ccb00af4752d8a5c7b44d7152dd9458fb598" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.131 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.186 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.187 189300 DEBUG nova.virt.disk.api [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Checking if we can resize image /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.187 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.284 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.286 189300 DEBUG nova.virt.disk.api [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Cannot resize image /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.286 189300 DEBUG nova.objects.instance [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lazy-loading 'migration_context' on Instance uuid 5d10f9fc-89ea-4059-8532-7e0aec0791d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.301 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "/var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.302 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "/var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.302 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "/var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.304 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.305 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.305 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.338 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.339 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.378 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.379 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.391 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.453 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.454 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.454 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.464 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.552 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.553 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.593 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 1073741824" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.595 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.596 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.620 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.655 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.656 189300 DEBUG nova.virt.libvirt.driver [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.656 189300 DEBUG nova.virt.libvirt.driver [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Ensure instance console log exists: /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.657 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.657 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.657 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.891 189300 DEBUG nova.network.neutron [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Successfully updated port: 0e0a227a-6212-4496-8954-fe210b763d0b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.912 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.912 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquired lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 28 17:55:53 compute-0 nova_compute[189296]: 2025-11-28 17:55:53.912 189300 DEBUG nova.network.neutron [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 28 17:55:54 compute-0 nova_compute[189296]: 2025-11-28 17:55:54.169 189300 DEBUG nova.network.neutron [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 28 17:55:54 compute-0 nova_compute[189296]: 2025-11-28 17:55:54.398 189300 DEBUG nova.compute.manager [req-1a5d154e-5418-4a2b-99e1-e49180c32b16 req-7e7a7646-b8ef-465a-9df0-851eef86d101 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Received event network-changed-0e0a227a-6212-4496-8954-fe210b763d0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 28 17:55:54 compute-0 nova_compute[189296]: 2025-11-28 17:55:54.399 189300 DEBUG nova.compute.manager [req-1a5d154e-5418-4a2b-99e1-e49180c32b16 req-7e7a7646-b8ef-465a-9df0-851eef86d101 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Refreshing instance network info cache due to event network-changed-0e0a227a-6212-4496-8954-fe210b763d0b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 28 17:55:54 compute-0 nova_compute[189296]: 2025-11-28 17:55:54.400 189300 DEBUG oslo_concurrency.lockutils [req-1a5d154e-5418-4a2b-99e1-e49180c32b16 req-7e7a7646-b8ef-465a-9df0-851eef86d101 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 28 17:55:54 compute-0 nova_compute[189296]: 2025-11-28 17:55:54.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 17:55:54 compute-0 nova_compute[189296]: 2025-11-28 17:55:54.627 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 28 17:55:54 compute-0 nova_compute[189296]: 2025-11-28 17:55:54.627 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 28 17:55:54 compute-0 nova_compute[189296]: 2025-11-28 17:55:54.644 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 28 17:55:54 compute-0 nova_compute[189296]: 2025-11-28 17:55:54.646 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 28 17:55:55 compute-0 nova_compute[189296]: 2025-11-28 17:55:55.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 17:55:56 compute-0 nova_compute[189296]: 2025-11-28 17:55:56.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 17:55:56 compute-0 nova_compute[189296]: 2025-11-28 17:55:56.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 17:55:56 compute-0 nova_compute[189296]: 2025-11-28 17:55:56.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 17:55:56 compute-0 nova_compute[189296]: 2025-11-28 17:55:56.646 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 17:55:56 compute-0 nova_compute[189296]: 2025-11-28 17:55:56.646 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 17:55:56 compute-0 nova_compute[189296]: 2025-11-28 17:55:56.647 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 17:55:56 compute-0 nova_compute[189296]: 2025-11-28 17:55:56.647 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 28 17:55:56 compute-0 nova_compute[189296]: 2025-11-28 17:55:56.962 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 28 17:55:56 compute-0 nova_compute[189296]: 2025-11-28 17:55:56.963 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5626MB free_disk=72.4072151184082GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 17:55:56 compute-0 nova_compute[189296]: 2025-11-28 17:55:56.964 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:55:56 compute-0 nova_compute[189296]: 2025-11-28 17:55:56.964 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.038 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 5d10f9fc-89ea-4059-8532-7e0aec0791d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.039 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.039 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.080 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Updating inventory in ProviderTree for provider d10a9930-4504-4222-97f7-6727a5a2d43b with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.110 189300 ERROR nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [req-6853706e-8c22-4da9-bd26-d43c0bae785f] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID d10a9930-4504-4222-97f7-6727a5a2d43b.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-6853706e-8c22-4da9-bd26-d43c0bae785f"}]}#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.131 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing inventories for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.154 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Updating ProviderTree inventory for provider d10a9930-4504-4222-97f7-6727a5a2d43b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.156 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Updating inventory in ProviderTree for provider d10a9930-4504-4222-97f7-6727a5a2d43b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.174 189300 DEBUG nova.network.neutron [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Updating instance_info_cache with network_info: [{"id": "0e0a227a-6212-4496-8954-fe210b763d0b", "address": "fa:16:3e:28:42:00", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e0a227a-62", "ovs_interfaceid": "0e0a227a-6212-4496-8954-fe210b763d0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.177 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing aggregate associations for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.192 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Releasing lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.192 189300 DEBUG nova.compute.manager [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Instance network_info: |[{"id": "0e0a227a-6212-4496-8954-fe210b763d0b", "address": "fa:16:3e:28:42:00", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e0a227a-62", "ovs_interfaceid": "0e0a227a-6212-4496-8954-fe210b763d0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.193 189300 DEBUG oslo_concurrency.lockutils [req-1a5d154e-5418-4a2b-99e1-e49180c32b16 req-7e7a7646-b8ef-465a-9df0-851eef86d101 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.193 189300 DEBUG nova.network.neutron [req-1a5d154e-5418-4a2b-99e1-e49180c32b16 req-7e7a7646-b8ef-465a-9df0-851eef86d101 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Refreshing network info cache for port 0e0a227a-6212-4496-8954-fe210b763d0b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.196 189300 DEBUG nova.virt.libvirt.driver [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Start _get_guest_xml network_info=[{"id": "0e0a227a-6212-4496-8954-fe210b763d0b", "address": "fa:16:3e:28:42:00", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e0a227a-62", "ovs_interfaceid": "0e0a227a-6212-4496-8954-fe210b763d0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-28T17:54:35Z,direct_url=<?>,disk_format='qcow2',id=f54c2688-82d2-4cd3-8c3b-96e774162948,min_disk=0,min_ram=0,name='cirros',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-28T17:54:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'image_id': 'f54c2688-82d2-4cd3-8c3b-96e774162948'}], 'ephemerals': [{'device_type': 'disk', 'guest_format': None, 'size': 1, 'encryption_options': None, 'device_name': '/dev/vdb', 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.204 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing trait associations for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b, traits: HW_CPU_X86_ABM,COMPUTE_NODE,HW_CPU_X86_SVM,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SATA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.215 189300 WARNING nova.virt.libvirt.driver [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.220 189300 DEBUG nova.virt.libvirt.host [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.221 189300 DEBUG nova.virt.libvirt.host [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.227 189300 DEBUG nova.virt.libvirt.host [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.228 189300 DEBUG nova.virt.libvirt.host [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.229 189300 DEBUG nova.virt.libvirt.driver [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.229 189300 DEBUG nova.virt.hardware [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-28T17:54:40Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='e125fa74-9e9f-47dc-8c8e-699980f99f10',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-28T17:54:35Z,direct_url=<?>,disk_format='qcow2',id=f54c2688-82d2-4cd3-8c3b-96e774162948,min_disk=0,min_ram=0,name='cirros',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-28T17:54:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.230 189300 DEBUG nova.virt.hardware [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.230 189300 DEBUG nova.virt.hardware [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.230 189300 DEBUG nova.virt.hardware [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.231 189300 DEBUG nova.virt.hardware [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.231 189300 DEBUG nova.virt.hardware [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.231 189300 DEBUG nova.virt.hardware [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.232 189300 DEBUG nova.virt.hardware [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.232 189300 DEBUG nova.virt.hardware [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.232 189300 DEBUG nova.virt.hardware [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.233 189300 DEBUG nova.virt.hardware [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.237 189300 DEBUG nova.privsep.utils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.238 189300 DEBUG nova.virt.libvirt.vif [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T17:55:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='79ee04b003ca4eb8a045699c7852a8b0',ramdisk_id='',reservation_id='r-a3s2pmkm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs
=None,updated_at=2025-11-28T17:55:50Z,user_data=None,user_id='6a35450c34a344b1a4e63aae1be2b971',uuid=5d10f9fc-89ea-4059-8532-7e0aec0791d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0e0a227a-6212-4496-8954-fe210b763d0b", "address": "fa:16:3e:28:42:00", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e0a227a-62", "ovs_interfaceid": "0e0a227a-6212-4496-8954-fe210b763d0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.239 189300 DEBUG nova.network.os_vif_util [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converting VIF {"id": "0e0a227a-6212-4496-8954-fe210b763d0b", "address": "fa:16:3e:28:42:00", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e0a227a-62", "ovs_interfaceid": "0e0a227a-6212-4496-8954-fe210b763d0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.240 189300 DEBUG nova.network.os_vif_util [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:42:00,bridge_name='br-int',has_traffic_filtering=True,id=0e0a227a-6212-4496-8954-fe210b763d0b,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e0a227a-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.242 189300 DEBUG nova.objects.instance [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5d10f9fc-89ea-4059-8532-7e0aec0791d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.252 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Updating inventory in ProviderTree for provider d10a9930-4504-4222-97f7-6727a5a2d43b with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.293 189300 DEBUG nova.virt.libvirt.driver [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] End _get_guest_xml xml=<domain type="kvm">
Nov 28 17:55:57 compute-0 nova_compute[189296]:  <uuid>5d10f9fc-89ea-4059-8532-7e0aec0791d6</uuid>
Nov 28 17:55:57 compute-0 nova_compute[189296]:  <name>instance-00000001</name>
Nov 28 17:55:57 compute-0 nova_compute[189296]:  <memory>524288</memory>
Nov 28 17:55:57 compute-0 nova_compute[189296]:  <vcpu>1</vcpu>
Nov 28 17:55:57 compute-0 nova_compute[189296]:  <metadata>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <nova:name>test_0</nova:name>
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <nova:creationTime>2025-11-28 17:55:57</nova:creationTime>
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <nova:flavor name="m1.small">
Nov 28 17:55:57 compute-0 nova_compute[189296]:        <nova:memory>512</nova:memory>
Nov 28 17:55:57 compute-0 nova_compute[189296]:        <nova:disk>1</nova:disk>
Nov 28 17:55:57 compute-0 nova_compute[189296]:        <nova:swap>0</nova:swap>
Nov 28 17:55:57 compute-0 nova_compute[189296]:        <nova:ephemeral>1</nova:ephemeral>
Nov 28 17:55:57 compute-0 nova_compute[189296]:        <nova:vcpus>1</nova:vcpus>
Nov 28 17:55:57 compute-0 nova_compute[189296]:      </nova:flavor>
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <nova:owner>
Nov 28 17:55:57 compute-0 nova_compute[189296]:        <nova:user uuid="6a35450c34a344b1a4e63aae1be2b971">admin</nova:user>
Nov 28 17:55:57 compute-0 nova_compute[189296]:        <nova:project uuid="79ee04b003ca4eb8a045699c7852a8b0">admin</nova:project>
Nov 28 17:55:57 compute-0 nova_compute[189296]:      </nova:owner>
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <nova:root type="image" uuid="f54c2688-82d2-4cd3-8c3b-96e774162948"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <nova:ports>
Nov 28 17:55:57 compute-0 nova_compute[189296]:        <nova:port uuid="0e0a227a-6212-4496-8954-fe210b763d0b">
Nov 28 17:55:57 compute-0 nova_compute[189296]:          <nova:ip type="fixed" address="192.168.0.67" ipVersion="4"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:        </nova:port>
Nov 28 17:55:57 compute-0 nova_compute[189296]:      </nova:ports>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    </nova:instance>
Nov 28 17:55:57 compute-0 nova_compute[189296]:  </metadata>
Nov 28 17:55:57 compute-0 nova_compute[189296]:  <sysinfo type="smbios">
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <system>
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <entry name="manufacturer">RDO</entry>
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <entry name="product">OpenStack Compute</entry>
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <entry name="serial">5d10f9fc-89ea-4059-8532-7e0aec0791d6</entry>
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <entry name="uuid">5d10f9fc-89ea-4059-8532-7e0aec0791d6</entry>
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <entry name="family">Virtual Machine</entry>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    </system>
Nov 28 17:55:57 compute-0 nova_compute[189296]:  </sysinfo>
Nov 28 17:55:57 compute-0 nova_compute[189296]:  <os>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <boot dev="hd"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <smbios mode="sysinfo"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:  </os>
Nov 28 17:55:57 compute-0 nova_compute[189296]:  <features>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <acpi/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <apic/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <vmcoreinfo/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:  </features>
Nov 28 17:55:57 compute-0 nova_compute[189296]:  <clock offset="utc">
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <timer name="pit" tickpolicy="delay"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <timer name="hpet" present="no"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:  </clock>
Nov 28 17:55:57 compute-0 nova_compute[189296]:  <cpu mode="host-model" match="exact">
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <topology sockets="1" cores="1" threads="1"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:  </cpu>
Nov 28 17:55:57 compute-0 nova_compute[189296]:  <devices>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <disk type="file" device="disk">
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <target dev="vda" bus="virtio"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    </disk>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <disk type="file" device="disk">
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <target dev="vdb" bus="virtio"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    </disk>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <disk type="file" device="cdrom">
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <driver name="qemu" type="raw" cache="none"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.config"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <target dev="sda" bus="sata"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    </disk>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <interface type="ethernet">
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <mac address="fa:16:3e:28:42:00"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <driver name="vhost" rx_queue_size="512"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <mtu size="1442"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <target dev="tap0e0a227a-62"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    </interface>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <serial type="pty">
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <log file="/var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/console.log" append="off"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    </serial>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <video>
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    </video>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <input type="tablet" bus="usb"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <rng model="virtio">
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <backend model="random">/dev/urandom</backend>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    </rng>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <controller type="usb" index="0"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    <memballoon model="virtio">
Nov 28 17:55:57 compute-0 nova_compute[189296]:      <stats period="10"/>
Nov 28 17:55:57 compute-0 nova_compute[189296]:    </memballoon>
Nov 28 17:55:57 compute-0 nova_compute[189296]:  </devices>
Nov 28 17:55:57 compute-0 nova_compute[189296]: </domain>
Nov 28 17:55:57 compute-0 nova_compute[189296]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.294 189300 DEBUG nova.compute.manager [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Preparing to wait for external event network-vif-plugged-0e0a227a-6212-4496-8954-fe210b763d0b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.294 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.294 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.294 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.295 189300 DEBUG nova.virt.libvirt.vif [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T17:55:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='79ee04b003ca4eb8a045699c7852a8b0',ramdisk_id='',reservation_id='r-a3s2pmkm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T17:55:50Z,user_data=None,user_id='6a35450c34a344b1a4e63aae1be2b971',uuid=5d10f9fc-89ea-4059-8532-7e0aec0791d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0e0a227a-6212-4496-8954-fe210b763d0b", "address": "fa:16:3e:28:42:00", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e0a227a-62", "ovs_interfaceid": "0e0a227a-6212-4496-8954-fe210b763d0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.295 189300 DEBUG nova.network.os_vif_util [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converting VIF {"id": "0e0a227a-6212-4496-8954-fe210b763d0b", "address": "fa:16:3e:28:42:00", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e0a227a-62", "ovs_interfaceid": "0e0a227a-6212-4496-8954-fe210b763d0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.296 189300 DEBUG nova.network.os_vif_util [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:42:00,bridge_name='br-int',has_traffic_filtering=True,id=0e0a227a-6212-4496-8954-fe210b763d0b,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e0a227a-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.296 189300 DEBUG os_vif [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:42:00,bridge_name='br-int',has_traffic_filtering=True,id=0e0a227a-6212-4496-8954-fe210b763d0b,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e0a227a-62') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.331 189300 DEBUG ovsdbapp.backend.ovs_idl [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.332 189300 DEBUG ovsdbapp.backend.ovs_idl [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.332 189300 DEBUG ovsdbapp.backend.ovs_idl [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.332 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.333 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.333 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.334 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.338 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.340 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.350 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.350 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.350 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.351 189300 INFO oslo.privsep.daemon [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpdj89vlwz/privsep.sock']#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.365 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Updated inventory for provider d10a9930-4504-4222-97f7-6727a5a2d43b with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.366 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Updating resource provider d10a9930-4504-4222-97f7-6727a5a2d43b generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.366 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Updating inventory in ProviderTree for provider d10a9930-4504-4222-97f7-6727a5a2d43b with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.390 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.391 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:55:57 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.910 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:55:58 compute-0 nova_compute[189296]: 2025-11-28 17:55:58.068 189300 INFO oslo.privsep.daemon [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 28 17:55:58 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.933 238779 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 28 17:55:58 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.940 238779 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 28 17:55:58 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.944 238779 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Nov 28 17:55:58 compute-0 nova_compute[189296]: 2025-11-28 17:55:57.944 238779 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238779#033[00m
Nov 28 17:55:58 compute-0 nova_compute[189296]: 2025-11-28 17:55:58.390 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:55:58 compute-0 nova_compute[189296]: 2025-11-28 17:55:58.410 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:55:58 compute-0 nova_compute[189296]: 2025-11-28 17:55:58.410 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0e0a227a-62, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 17:55:58 compute-0 nova_compute[189296]: 2025-11-28 17:55:58.411 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0e0a227a-62, col_values=(('external_ids', {'iface-id': '0e0a227a-6212-4496-8954-fe210b763d0b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:42:00', 'vm-uuid': '5d10f9fc-89ea-4059-8532-7e0aec0791d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 17:55:58 compute-0 nova_compute[189296]: 2025-11-28 17:55:58.412 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:55:58 compute-0 NetworkManager[56307]: <info>  [1764352558.4137] manager: (tap0e0a227a-62): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 28 17:55:58 compute-0 nova_compute[189296]: 2025-11-28 17:55:58.419 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:55:58 compute-0 nova_compute[189296]: 2025-11-28 17:55:58.421 189300 INFO os_vif [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:42:00,bridge_name='br-int',has_traffic_filtering=True,id=0e0a227a-6212-4496-8954-fe210b763d0b,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e0a227a-62')#033[00m
Nov 28 17:55:58 compute-0 nova_compute[189296]: 2025-11-28 17:55:58.473 189300 DEBUG nova.virt.libvirt.driver [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 17:55:58 compute-0 nova_compute[189296]: 2025-11-28 17:55:58.473 189300 DEBUG nova.virt.libvirt.driver [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 17:55:58 compute-0 nova_compute[189296]: 2025-11-28 17:55:58.473 189300 DEBUG nova.virt.libvirt.driver [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 17:55:58 compute-0 nova_compute[189296]: 2025-11-28 17:55:58.473 189300 DEBUG nova.virt.libvirt.driver [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] No VIF found with MAC fa:16:3e:28:42:00, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 28 17:55:58 compute-0 nova_compute[189296]: 2025-11-28 17:55:58.474 189300 INFO nova.virt.libvirt.driver [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Using config drive#033[00m
Nov 28 17:55:58 compute-0 nova_compute[189296]: 2025-11-28 17:55:58.620 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:55:59 compute-0 nova_compute[189296]: 2025-11-28 17:55:59.536 189300 DEBUG nova.network.neutron [req-1a5d154e-5418-4a2b-99e1-e49180c32b16 req-7e7a7646-b8ef-465a-9df0-851eef86d101 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Updated VIF entry in instance network info cache for port 0e0a227a-6212-4496-8954-fe210b763d0b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 28 17:55:59 compute-0 nova_compute[189296]: 2025-11-28 17:55:59.537 189300 DEBUG nova.network.neutron [req-1a5d154e-5418-4a2b-99e1-e49180c32b16 req-7e7a7646-b8ef-465a-9df0-851eef86d101 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Updating instance_info_cache with network_info: [{"id": "0e0a227a-6212-4496-8954-fe210b763d0b", "address": "fa:16:3e:28:42:00", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e0a227a-62", "ovs_interfaceid": "0e0a227a-6212-4496-8954-fe210b763d0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 17:55:59 compute-0 nova_compute[189296]: 2025-11-28 17:55:59.551 189300 DEBUG oslo_concurrency.lockutils [req-1a5d154e-5418-4a2b-99e1-e49180c32b16 req-7e7a7646-b8ef-465a-9df0-851eef86d101 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 17:55:59 compute-0 podman[203494]: time="2025-11-28T17:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 17:55:59 compute-0 podman[203494]: @ - - [28/Nov/2025:17:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 28 17:55:59 compute-0 podman[203494]: @ - - [28/Nov/2025:17:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4284 "" "Go-http-client/1.1"
Nov 28 17:56:00 compute-0 podman[238786]: 2025-11-28 17:56:00.074681703 +0000 UTC m=+0.117953330 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, tcib_managed=true)
Nov 28 17:56:00 compute-0 podman[238785]: 2025-11-28 17:56:00.088592985 +0000 UTC m=+0.131654627 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, version=9.6, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, release=1755695350, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 28 17:56:00 compute-0 podman[238787]: 2025-11-28 17:56:00.10507022 +0000 UTC m=+0.134806295 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible)
Nov 28 17:56:00 compute-0 nova_compute[189296]: 2025-11-28 17:56:00.190 189300 INFO nova.virt.libvirt.driver [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Creating config drive at /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.config#033[00m
Nov 28 17:56:00 compute-0 nova_compute[189296]: 2025-11-28 17:56:00.194 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqt0lywww execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:56:00 compute-0 nova_compute[189296]: 2025-11-28 17:56:00.320 189300 DEBUG oslo_concurrency.processutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqt0lywww" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:56:00 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 28 17:56:00 compute-0 NetworkManager[56307]: <info>  [1764352560.4091] manager: (tap0e0a227a-62): new Tun device (/org/freedesktop/NetworkManager/Devices/20)
Nov 28 17:56:00 compute-0 kernel: tap0e0a227a-62: entered promiscuous mode
Nov 28 17:56:00 compute-0 ovn_controller[97771]: 2025-11-28T17:56:00Z|00027|binding|INFO|Claiming lport 0e0a227a-6212-4496-8954-fe210b763d0b for this chassis.
Nov 28 17:56:00 compute-0 ovn_controller[97771]: 2025-11-28T17:56:00Z|00028|binding|INFO|0e0a227a-6212-4496-8954-fe210b763d0b: Claiming fa:16:3e:28:42:00 192.168.0.67
Nov 28 17:56:00 compute-0 nova_compute[189296]: 2025-11-28 17:56:00.412 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:56:00 compute-0 nova_compute[189296]: 2025-11-28 17:56:00.419 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:56:00 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:00.433 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:42:00 192.168.0.67'], port_security=['fa:16:3e:28:42:00 192.168.0.67'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.67/24', 'neutron:device_id': '5d10f9fc-89ea-4059-8532-7e0aec0791d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '79ee04b003ca4eb8a045699c7852a8b0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a309e23b-efb6-4377-8050-5a658324ee07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37710b57-0bdd-4c1a-aa8d-366aa83fbf51, chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=0e0a227a-6212-4496-8954-fe210b763d0b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 17:56:00 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:00.434 106624 INFO neutron.agent.ovn.metadata.agent [-] Port 0e0a227a-6212-4496-8954-fe210b763d0b in datapath 5cc11a5f-7338-49fd-ba02-2db7ff676c4f bound to our chassis#033[00m
Nov 28 17:56:00 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:00.437 106624 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cc11a5f-7338-49fd-ba02-2db7ff676c4f#033[00m
Nov 28 17:56:00 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:00.438 106624 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpcu4ymwma/privsep.sock']#033[00m
Nov 28 17:56:00 compute-0 systemd-udevd[238866]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 17:56:00 compute-0 NetworkManager[56307]: <info>  [1764352560.4816] device (tap0e0a227a-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 28 17:56:00 compute-0 NetworkManager[56307]: <info>  [1764352560.4865] device (tap0e0a227a-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 28 17:56:00 compute-0 systemd-machined[155703]: New machine qemu-1-instance-00000001.
Nov 28 17:56:00 compute-0 nova_compute[189296]: 2025-11-28 17:56:00.507 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:56:00 compute-0 ovn_controller[97771]: 2025-11-28T17:56:00Z|00029|binding|INFO|Setting lport 0e0a227a-6212-4496-8954-fe210b763d0b ovn-installed in OVS
Nov 28 17:56:00 compute-0 ovn_controller[97771]: 2025-11-28T17:56:00Z|00030|binding|INFO|Setting lport 0e0a227a-6212-4496-8954-fe210b763d0b up in Southbound
Nov 28 17:56:00 compute-0 nova_compute[189296]: 2025-11-28 17:56:00.516 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:56:00 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Nov 28 17:56:00 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 28 17:56:00 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 28 17:56:00 compute-0 nova_compute[189296]: 2025-11-28 17:56:00.933 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764352560.9333217, 5d10f9fc-89ea-4059-8532-7e0aec0791d6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 17:56:00 compute-0 nova_compute[189296]: 2025-11-28 17:56:00.935 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] VM Started (Lifecycle Event)#033[00m
Nov 28 17:56:00 compute-0 nova_compute[189296]: 2025-11-28 17:56:00.975 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 17:56:00 compute-0 nova_compute[189296]: 2025-11-28 17:56:00.981 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764352560.9334457, 5d10f9fc-89ea-4059-8532-7e0aec0791d6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 17:56:00 compute-0 nova_compute[189296]: 2025-11-28 17:56:00.982 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] VM Paused (Lifecycle Event)#033[00m
Nov 28 17:56:01 compute-0 nova_compute[189296]: 2025-11-28 17:56:00.999 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 17:56:01 compute-0 nova_compute[189296]: 2025-11-28 17:56:01.005 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 17:56:01 compute-0 nova_compute[189296]: 2025-11-28 17:56:01.021 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 28 17:56:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:01.101 106624 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 28 17:56:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:01.101 106624 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpcu4ymwma/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 28 17:56:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:00.986 238909 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 28 17:56:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:00.990 238909 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 28 17:56:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:00.992 238909 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Nov 28 17:56:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:00.993 238909 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238909#033[00m
Nov 28 17:56:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:01.104 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[63faf07d-7b1d-45df-9921-d17d6813708c]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 17:56:01 compute-0 openstack_network_exporter[205632]: ERROR   17:56:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 17:56:01 compute-0 openstack_network_exporter[205632]: ERROR   17:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:56:01 compute-0 openstack_network_exporter[205632]: ERROR   17:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:56:01 compute-0 openstack_network_exporter[205632]: ERROR   17:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 17:56:01 compute-0 openstack_network_exporter[205632]: ERROR   17:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 17:56:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:01.606 238909 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:56:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:01.606 238909 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:56:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:01.606 238909 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:56:01 compute-0 nova_compute[189296]: 2025-11-28 17:56:01.746 189300 DEBUG nova.compute.manager [req-fedbc0da-7145-4f5d-9eeb-4aacbdf14f81 req-fcc9ece7-4cf2-4b20-a7ab-54975a71c2aa 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Received event network-vif-plugged-0e0a227a-6212-4496-8954-fe210b763d0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 17:56:01 compute-0 nova_compute[189296]: 2025-11-28 17:56:01.747 189300 DEBUG oslo_concurrency.lockutils [req-fedbc0da-7145-4f5d-9eeb-4aacbdf14f81 req-fcc9ece7-4cf2-4b20-a7ab-54975a71c2aa 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:56:01 compute-0 nova_compute[189296]: 2025-11-28 17:56:01.748 189300 DEBUG oslo_concurrency.lockutils [req-fedbc0da-7145-4f5d-9eeb-4aacbdf14f81 req-fcc9ece7-4cf2-4b20-a7ab-54975a71c2aa 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:56:01 compute-0 nova_compute[189296]: 2025-11-28 17:56:01.749 189300 DEBUG oslo_concurrency.lockutils [req-fedbc0da-7145-4f5d-9eeb-4aacbdf14f81 req-fcc9ece7-4cf2-4b20-a7ab-54975a71c2aa 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:56:01 compute-0 nova_compute[189296]: 2025-11-28 17:56:01.749 189300 DEBUG nova.compute.manager [req-fedbc0da-7145-4f5d-9eeb-4aacbdf14f81 req-fcc9ece7-4cf2-4b20-a7ab-54975a71c2aa 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Processing event network-vif-plugged-0e0a227a-6212-4496-8954-fe210b763d0b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 28 17:56:01 compute-0 nova_compute[189296]: 2025-11-28 17:56:01.751 189300 DEBUG nova.compute.manager [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 28 17:56:01 compute-0 nova_compute[189296]: 2025-11-28 17:56:01.769 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764352561.7684793, 5d10f9fc-89ea-4059-8532-7e0aec0791d6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 17:56:01 compute-0 nova_compute[189296]: 2025-11-28 17:56:01.770 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] VM Resumed (Lifecycle Event)#033[00m
Nov 28 17:56:01 compute-0 nova_compute[189296]: 2025-11-28 17:56:01.786 189300 DEBUG nova.virt.libvirt.driver [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 28 17:56:01 compute-0 nova_compute[189296]: 2025-11-28 17:56:01.794 189300 INFO nova.virt.libvirt.driver [-] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Instance spawned successfully.#033[00m
Nov 28 17:56:01 compute-0 nova_compute[189296]: 2025-11-28 17:56:01.795 189300 DEBUG nova.virt.libvirt.driver [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 28 17:56:01 compute-0 nova_compute[189296]: 2025-11-28 17:56:01.814 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 17:56:01 compute-0 nova_compute[189296]: 2025-11-28 17:56:01.819 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 17:56:01 compute-0 nova_compute[189296]: 2025-11-28 17:56:01.851 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 28 17:56:01 compute-0 nova_compute[189296]: 2025-11-28 17:56:01.861 189300 DEBUG nova.virt.libvirt.driver [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 17:56:01 compute-0 nova_compute[189296]: 2025-11-28 17:56:01.861 189300 DEBUG nova.virt.libvirt.driver [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 17:56:01 compute-0 nova_compute[189296]: 2025-11-28 17:56:01.862 189300 DEBUG nova.virt.libvirt.driver [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 17:56:01 compute-0 nova_compute[189296]: 2025-11-28 17:56:01.863 189300 DEBUG nova.virt.libvirt.driver [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 17:56:01 compute-0 nova_compute[189296]: 2025-11-28 17:56:01.863 189300 DEBUG nova.virt.libvirt.driver [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 17:56:01 compute-0 nova_compute[189296]: 2025-11-28 17:56:01.864 189300 DEBUG nova.virt.libvirt.driver [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 17:56:02 compute-0 nova_compute[189296]: 2025-11-28 17:56:02.001 189300 INFO nova.compute.manager [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Took 11.28 seconds to spawn the instance on the hypervisor.#033[00m
Nov 28 17:56:02 compute-0 nova_compute[189296]: 2025-11-28 17:56:02.002 189300 DEBUG nova.compute.manager [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 17:56:02 compute-0 nova_compute[189296]: 2025-11-28 17:56:02.108 189300 INFO nova.compute.manager [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Took 11.85 seconds to build instance.#033[00m
Nov 28 17:56:02 compute-0 nova_compute[189296]: 2025-11-28 17:56:02.163 189300 DEBUG oslo_concurrency.lockutils [None req-73007f09-9573-400c-8320-f34ec0986b48 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:56:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:02.179 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[b6a1052a-4a93-4392-b537-55a8aecc215c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 17:56:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:02.181 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5cc11a5f-71 in ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 28 17:56:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:02.183 238909 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5cc11a5f-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 28 17:56:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:02.184 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[83e72499-3794-4421-a32b-6e50badac7d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 17:56:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:02.187 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[26b8a9b7-26eb-4e73-b9ad-3a8fa59837a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 17:56:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:02.216 106734 DEBUG oslo.privsep.daemon [-] privsep: reply[6ac29212-d4a7-4577-9bf4-4616919fdefc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 17:56:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:02.243 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[506dc11c-41e8-4287-9233-cca0d2801426]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 17:56:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:02.245 106624 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpezdfd0sq/privsep.sock']#033[00m
Nov 28 17:56:02 compute-0 nova_compute[189296]: 2025-11-28 17:56:02.913 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:56:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:02.916 106624 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 28 17:56:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:02.917 106624 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpezdfd0sq/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 28 17:56:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:02.792 238923 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 28 17:56:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:02.796 238923 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 28 17:56:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:02.798 238923 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Nov 28 17:56:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:02.799 238923 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238923#033[00m
Nov 28 17:56:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:02.920 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[fa43119e-6b7c-433b-924a-952384c775ba]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 17:56:03 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:03.406 238923 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:56:03 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:03.406 238923 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:56:03 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:03.406 238923 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:56:03 compute-0 nova_compute[189296]: 2025-11-28 17:56:03.414 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:56:03 compute-0 nova_compute[189296]: 2025-11-28 17:56:03.819 189300 DEBUG nova.compute.manager [req-ef41bac7-096c-49d7-ad23-82e1a5590bac req-a365718e-5c55-4100-b382-811b04ca9123 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Received event network-vif-plugged-0e0a227a-6212-4496-8954-fe210b763d0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 17:56:03 compute-0 nova_compute[189296]: 2025-11-28 17:56:03.820 189300 DEBUG oslo_concurrency.lockutils [req-ef41bac7-096c-49d7-ad23-82e1a5590bac req-a365718e-5c55-4100-b382-811b04ca9123 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:56:03 compute-0 nova_compute[189296]: 2025-11-28 17:56:03.820 189300 DEBUG oslo_concurrency.lockutils [req-ef41bac7-096c-49d7-ad23-82e1a5590bac req-a365718e-5c55-4100-b382-811b04ca9123 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:56:03 compute-0 nova_compute[189296]: 2025-11-28 17:56:03.821 189300 DEBUG oslo_concurrency.lockutils [req-ef41bac7-096c-49d7-ad23-82e1a5590bac req-a365718e-5c55-4100-b382-811b04ca9123 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:56:03 compute-0 nova_compute[189296]: 2025-11-28 17:56:03.821 189300 DEBUG nova.compute.manager [req-ef41bac7-096c-49d7-ad23-82e1a5590bac req-a365718e-5c55-4100-b382-811b04ca9123 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] No waiting events found dispatching network-vif-plugged-0e0a227a-6212-4496-8954-fe210b763d0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 17:56:03 compute-0 nova_compute[189296]: 2025-11-28 17:56:03.821 189300 WARNING nova.compute.manager [req-ef41bac7-096c-49d7-ad23-82e1a5590bac req-a365718e-5c55-4100-b382-811b04ca9123 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Received unexpected event network-vif-plugged-0e0a227a-6212-4496-8954-fe210b763d0b for instance with vm_state active and task_state None.#033[00m
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:04.019 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[83350879-bf86-4547-8c29-9cc7aaccd1f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 17:56:04 compute-0 NetworkManager[56307]: <info>  [1764352564.0445] manager: (tap5cc11a5f-70): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:04.043 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[2858de99-b6ed-42ae-a416-35db89c878f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 17:56:04 compute-0 systemd-udevd[238935]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:04.087 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[80f81187-f3d9-4f23-9ce8-16558d969aca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:04.090 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[ea12e380-0ef1-4768-b90b-a4189a66b752]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 17:56:04 compute-0 NetworkManager[56307]: <info>  [1764352564.1148] device (tap5cc11a5f-70): carrier: link connected
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:04.124 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[19f5e53f-d83d-488f-b030-d755380a2a23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:04.145 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[7f5ad46f-4f9f-4cf4-9fc0-d271359f3865]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc11a5f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:38:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 370971, 'reachable_time': 20370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238953, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:04.163 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[5ddd4a46-29e8-4082-81e9-435505b408cc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe54:385b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 370971, 'tstamp': 370971}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238954, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:04.163 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[5ddd4a46-29e8-4082-81e9-435505b408cc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe54:385b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 370971, 'tstamp': 370971}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238954, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:04.181 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[6884c7e8-1430-44e7-a2f9-95aa9d88ed8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc11a5f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:38:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 370971, 'reachable_time': 20370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 238955, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:04.212 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[12d84f1b-0312-419f-bdfc-b665efd1a6af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:04.276 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[fec00ad9-ffb9-4042-b5ff-04549d14d6e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:04.278 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc11a5f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:04.278 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:04.279 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cc11a5f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 17:56:04 compute-0 kernel: tap5cc11a5f-70: entered promiscuous mode
Nov 28 17:56:04 compute-0 NetworkManager[56307]: <info>  [1764352564.2820] manager: (tap5cc11a5f-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Nov 28 17:56:04 compute-0 nova_compute[189296]: 2025-11-28 17:56:04.281 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:04.287 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cc11a5f-70, col_values=(('external_ids', {'iface-id': '467e3797-177d-4174-b963-0efbd15595b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 17:56:04 compute-0 ovn_controller[97771]: 2025-11-28T17:56:04Z|00031|binding|INFO|Releasing lport 467e3797-177d-4174-b963-0efbd15595b9 from this chassis (sb_readonly=0)
Nov 28 17:56:04 compute-0 nova_compute[189296]: 2025-11-28 17:56:04.289 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:56:04 compute-0 nova_compute[189296]: 2025-11-28 17:56:04.290 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:04.291 106624 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5cc11a5f-7338-49fd-ba02-2db7ff676c4f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5cc11a5f-7338-49fd-ba02-2db7ff676c4f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:04.292 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[06dc3faa-522e-416e-acff-58a24117b921]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:04.294 106624 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]: global
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]:    log         /dev/log local0 debug
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]:    log-tag     haproxy-metadata-proxy-5cc11a5f-7338-49fd-ba02-2db7ff676c4f
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]:    user        root
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]:    group       root
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]:    maxconn     1024
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]:    pidfile     /var/lib/neutron/external/pids/5cc11a5f-7338-49fd-ba02-2db7ff676c4f.pid.haproxy
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]:    daemon
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]: 
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]: defaults
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]:    log global
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]:    mode http
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]:    option httplog
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]:    option dontlognull
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]:    option http-server-close
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]:    option forwardfor
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]:    retries                 3
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]:    timeout http-request    30s
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]:    timeout connect         30s
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]:    timeout client          32s
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]:    timeout server          32s
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]:    timeout http-keep-alive 30s
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]: 
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]: 
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]: listen listener
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]:    bind 169.254.169.254:80
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]:    server metadata /var/lib/neutron/metadata_proxy
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]:    http-request add-header X-OVN-Network-ID 5cc11a5f-7338-49fd-ba02-2db7ff676c4f
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 28 17:56:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:04.295 106624 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'env', 'PROCESS_TAG=haproxy-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5cc11a5f-7338-49fd-ba02-2db7ff676c4f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 28 17:56:04 compute-0 nova_compute[189296]: 2025-11-28 17:56:04.305 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:56:04 compute-0 podman[238986]: 2025-11-28 17:56:04.750449427 +0000 UTC m=+0.071560980 container create 7ef4d31e8a49646b5a8298d104069287aa28ac253e071a5106da21f1fdf30eeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 28 17:56:04 compute-0 systemd[1]: Started libpod-conmon-7ef4d31e8a49646b5a8298d104069287aa28ac253e071a5106da21f1fdf30eeb.scope.
Nov 28 17:56:04 compute-0 podman[238986]: 2025-11-28 17:56:04.711363046 +0000 UTC m=+0.032474609 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 28 17:56:04 compute-0 systemd[1]: Started libcrun container.
Nov 28 17:56:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b9de74ec40611e6e4d55216482e0f148c142adf9c9083d7b2b1f5ff871de056/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 28 17:56:04 compute-0 podman[238986]: 2025-11-28 17:56:04.875530442 +0000 UTC m=+0.196642005 container init 7ef4d31e8a49646b5a8298d104069287aa28ac253e071a5106da21f1fdf30eeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 28 17:56:04 compute-0 podman[238986]: 2025-11-28 17:56:04.885253451 +0000 UTC m=+0.206364984 container start 7ef4d31e8a49646b5a8298d104069287aa28ac253e071a5106da21f1fdf30eeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 28 17:56:04 compute-0 neutron-haproxy-ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f[239001]: [NOTICE]   (239006) : New worker (239008) forked
Nov 28 17:56:04 compute-0 neutron-haproxy-ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f[239001]: [NOTICE]   (239006) : Loading success.
Nov 28 17:56:06 compute-0 podman[239018]: 2025-11-28 17:56:06.01690664 +0000 UTC m=+0.075319382 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 17:56:06 compute-0 podman[239017]: 2025-11-28 17:56:06.018392367 +0000 UTC m=+0.080004338 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 17:56:07 compute-0 nova_compute[189296]: 2025-11-28 17:56:07.918 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 17:56:08 compute-0 nova_compute[189296]: 2025-11-28 17:56:08.416 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 17:56:08 compute-0 podman[239055]: 2025-11-28 17:56:08.996311976 +0000 UTC m=+0.057371611 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 17:56:09 compute-0 podman[239056]: 2025-11-28 17:56:09.009608442 +0000 UTC m=+0.065760237 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.tags=base rhel9, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, com.redhat.component=ubi9-container, container_name=kepler, config_id=edpm, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 28 17:56:12 compute-0 podman[239097]: 2025-11-28 17:56:12.04088917 +0000 UTC m=+0.106900408 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 17:56:12 compute-0 nova_compute[189296]: 2025-11-28 17:56:12.921 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 17:56:13 compute-0 nova_compute[189296]: 2025-11-28 17:56:13.418 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 17:56:16 compute-0 ovn_controller[97771]: 2025-11-28T17:56:16Z|00032|binding|INFO|Releasing lport 467e3797-177d-4174-b963-0efbd15595b9 from this chassis (sb_readonly=0)
Nov 28 17:56:16 compute-0 nova_compute[189296]: 2025-11-28 17:56:16.285 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 17:56:16 compute-0 NetworkManager[56307]: <info>  [1764352576.3026] manager: (patch-br-int-to-provnet-564e20d3-e524-48c8-993a-ae41282beadd): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Nov 28 17:56:16 compute-0 NetworkManager[56307]: <info>  [1764352576.3033] device (patch-br-int-to-provnet-564e20d3-e524-48c8-993a-ae41282beadd)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 28 17:56:16 compute-0 NetworkManager[56307]: <info>  [1764352576.3044] manager: (patch-provnet-564e20d3-e524-48c8-993a-ae41282beadd-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Nov 28 17:56:16 compute-0 NetworkManager[56307]: <info>  [1764352576.3047] device (patch-provnet-564e20d3-e524-48c8-993a-ae41282beadd-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 28 17:56:16 compute-0 NetworkManager[56307]: <info>  [1764352576.3054] manager: (patch-provnet-564e20d3-e524-48c8-993a-ae41282beadd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Nov 28 17:56:16 compute-0 NetworkManager[56307]: <info>  [1764352576.3059] manager: (patch-br-int-to-provnet-564e20d3-e524-48c8-993a-ae41282beadd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Nov 28 17:56:16 compute-0 NetworkManager[56307]: <info>  [1764352576.3063] device (patch-br-int-to-provnet-564e20d3-e524-48c8-993a-ae41282beadd)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 28 17:56:16 compute-0 NetworkManager[56307]: <info>  [1764352576.3065] device (patch-provnet-564e20d3-e524-48c8-993a-ae41282beadd-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 28 17:56:16 compute-0 ovn_controller[97771]: 2025-11-28T17:56:16Z|00033|binding|INFO|Releasing lport 467e3797-177d-4174-b963-0efbd15595b9 from this chassis (sb_readonly=0)
Nov 28 17:56:16 compute-0 nova_compute[189296]: 2025-11-28 17:56:16.321 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 17:56:16 compute-0 nova_compute[189296]: 2025-11-28 17:56:16.330 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 17:56:16 compute-0 nova_compute[189296]: 2025-11-28 17:56:16.657 189300 DEBUG nova.compute.manager [req-c27fbe6d-e83a-4cbd-81c2-361eb21f68fc req-f26c8533-0920-41db-ad06-67ab066d64c4 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Received event network-changed-0e0a227a-6212-4496-8954-fe210b763d0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 28 17:56:16 compute-0 nova_compute[189296]: 2025-11-28 17:56:16.658 189300 DEBUG nova.compute.manager [req-c27fbe6d-e83a-4cbd-81c2-361eb21f68fc req-f26c8533-0920-41db-ad06-67ab066d64c4 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Refreshing instance network info cache due to event network-changed-0e0a227a-6212-4496-8954-fe210b763d0b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 28 17:56:16 compute-0 nova_compute[189296]: 2025-11-28 17:56:16.658 189300 DEBUG oslo_concurrency.lockutils [req-c27fbe6d-e83a-4cbd-81c2-361eb21f68fc req-f26c8533-0920-41db-ad06-67ab066d64c4 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 28 17:56:16 compute-0 nova_compute[189296]: 2025-11-28 17:56:16.659 189300 DEBUG oslo_concurrency.lockutils [req-c27fbe6d-e83a-4cbd-81c2-361eb21f68fc req-f26c8533-0920-41db-ad06-67ab066d64c4 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 28 17:56:16 compute-0 nova_compute[189296]: 2025-11-28 17:56:16.659 189300 DEBUG nova.network.neutron [req-c27fbe6d-e83a-4cbd-81c2-361eb21f68fc req-f26c8533-0920-41db-ad06-67ab066d64c4 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Refreshing network info cache for port 0e0a227a-6212-4496-8954-fe210b763d0b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 28 17:56:17 compute-0 nova_compute[189296]: 2025-11-28 17:56:17.923 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 17:56:18 compute-0 nova_compute[189296]: 2025-11-28 17:56:18.374 189300 DEBUG nova.network.neutron [req-c27fbe6d-e83a-4cbd-81c2-361eb21f68fc req-f26c8533-0920-41db-ad06-67ab066d64c4 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Updated VIF entry in instance network info cache for port 0e0a227a-6212-4496-8954-fe210b763d0b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 28 17:56:18 compute-0 nova_compute[189296]: 2025-11-28 17:56:18.374 189300 DEBUG nova.network.neutron [req-c27fbe6d-e83a-4cbd-81c2-361eb21f68fc req-f26c8533-0920-41db-ad06-67ab066d64c4 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Updating instance_info_cache with network_info: [{"id": "0e0a227a-6212-4496-8954-fe210b763d0b", "address": "fa:16:3e:28:42:00", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e0a227a-62", "ovs_interfaceid": "0e0a227a-6212-4496-8954-fe210b763d0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 28 17:56:18 compute-0 nova_compute[189296]: 2025-11-28 17:56:18.397 189300 DEBUG oslo_concurrency.lockutils [req-c27fbe6d-e83a-4cbd-81c2-361eb21f68fc req-f26c8533-0920-41db-ad06-67ab066d64c4 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 28 17:56:18 compute-0 nova_compute[189296]: 2025-11-28 17:56:18.421 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 17:56:21 compute-0 podman[239125]: 2025-11-28 17:56:21.000175966 +0000 UTC m=+0.062176649 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 28 17:56:22 compute-0 nova_compute[189296]: 2025-11-28 17:56:22.925 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 17:56:23 compute-0 nova_compute[189296]: 2025-11-28 17:56:23.423 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 17:56:27 compute-0 nova_compute[189296]: 2025-11-28 17:56:27.927 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 17:56:28 compute-0 nova_compute[189296]: 2025-11-28 17:56:28.426 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 17:56:29 compute-0 podman[203494]: time="2025-11-28T17:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 17:56:29 compute-0 podman[203494]: @ - - [28/Nov/2025:17:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 17:56:29 compute-0 podman[203494]: @ - - [28/Nov/2025:17:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4744 "" "Go-http-client/1.1"
Nov 28 17:56:31 compute-0 podman[239148]: 2025-11-28 17:56:31.004297167 +0000 UTC m=+0.065469480 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=f26160204c78771e78cdd2489258319b)
Nov 28 17:56:31 compute-0 podman[239149]: 2025-11-28 17:56:31.020863935 +0000 UTC m=+0.078187883 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 28 17:56:31 compute-0 podman[239147]: 2025-11-28 17:56:31.033883925 +0000 UTC m=+0.099308253 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-type=git, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, version=9.6, architecture=x86_64, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 28 17:56:31 compute-0 openstack_network_exporter[205632]: ERROR   17:56:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 17:56:31 compute-0 openstack_network_exporter[205632]: ERROR   17:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:56:31 compute-0 openstack_network_exporter[205632]: ERROR   17:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:56:31 compute-0 openstack_network_exporter[205632]: ERROR   17:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 17:56:31 compute-0 openstack_network_exporter[205632]: ERROR   17:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 17:56:32 compute-0 nova_compute[189296]: 2025-11-28 17:56:32.928 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 17:56:33 compute-0 nova_compute[189296]: 2025-11-28 17:56:33.429 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 17:56:33 compute-0 ovn_controller[97771]: 2025-11-28T17:56:33Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:28:42:00 192.168.0.67
Nov 28 17:56:33 compute-0 ovn_controller[97771]: 2025-11-28T17:56:33Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:28:42:00 192.168.0.67
Nov 28 17:56:37 compute-0 podman[239212]: 2025-11-28 17:56:37.009682569 +0000 UTC m=+0.063297967 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 28 17:56:37 compute-0 podman[239211]: 2025-11-28 17:56:37.033538256 +0000 UTC m=+0.078552842 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 28 17:56:37 compute-0 nova_compute[189296]: 2025-11-28 17:56:37.931 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:56:38 compute-0 nova_compute[189296]: 2025-11-28 17:56:38.431 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:56:40 compute-0 podman[239250]: 2025-11-28 17:56:40.013842553 +0000 UTC m=+0.073114988 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 28 17:56:40 compute-0 podman[239251]: 2025-11-28 17:56:40.047724056 +0000 UTC m=+0.099396215 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, container_name=kepler, io.buildah.version=1.29.0, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, vendor=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64)
Nov 28 17:56:42 compute-0 nova_compute[189296]: 2025-11-28 17:56:42.933 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:56:43 compute-0 podman[239291]: 2025-11-28 17:56:43.040730004 +0000 UTC m=+0.106317114 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 17:56:43 compute-0 nova_compute[189296]: 2025-11-28 17:56:43.434 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:56:46 compute-0 ovn_controller[97771]: 2025-11-28T17:56:46Z|00034|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 28 17:56:47 compute-0 nova_compute[189296]: 2025-11-28 17:56:47.936 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:56:48 compute-0 nova_compute[189296]: 2025-11-28 17:56:48.437 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.975 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.976 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.976 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.977 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc143395760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.977 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.977 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.978 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.978 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.978 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.978 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.978 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.978 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.978 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.978 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.979 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.979 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.979 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.979 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.979 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:56:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:51.983 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 5d10f9fc-89ea-4059-8532-7e0aec0791d6 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 28 17:56:52 compute-0 podman[239319]: 2025-11-28 17:56:52.015827726 +0000 UTC m=+0.066409343 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 17:56:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:52.366 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/5d10f9fc-89ea-4059-8532-7e0aec0791d6 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}1b19fef84fe76c5f8eb41f423a94cfc31b2af00fb7940935967c184dd40fa55a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 28 17:56:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:52.600 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:56:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:52.601 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:56:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:56:52.601 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:56:52 compute-0 nova_compute[189296]: 2025-11-28 17:56:52.938 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.026 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1849 Content-Type: application/json Date: Fri, 28 Nov 2025 17:56:52 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-98c5407c-349b-4f52-99cd-19fae727692c x-openstack-request-id: req-98c5407c-349b-4f52-99cd-19fae727692c _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.027 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "5d10f9fc-89ea-4059-8532-7e0aec0791d6", "name": "test_0", "status": "ACTIVE", "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "user_id": "6a35450c34a344b1a4e63aae1be2b971", "metadata": {}, "hostId": "db9a2769e8f144ae30ff05291a20072f031ca2fe14565f94b8d8a651", "image": {"id": "f54c2688-82d2-4cd3-8c3b-96e774162948", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/f54c2688-82d2-4cd3-8c3b-96e774162948"}]}, "flavor": {"id": "e125fa74-9e9f-47dc-8c8e-699980f99f10", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/e125fa74-9e9f-47dc-8c8e-699980f99f10"}]}, "created": "2025-11-28T17:55:48Z", "updated": "2025-11-28T17:56:02Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.67", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:28:42:00"}, {"version": 4, "addr": "192.168.122.235", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:28:42:00"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/5d10f9fc-89ea-4059-8532-7e0aec0791d6"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/5d10f9fc-89ea-4059-8532-7e0aec0791d6"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-28T17:56:02.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response 
/usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.027 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/5d10f9fc-89ea-4059-8532-7e0aec0791d6 used request id req-98c5407c-349b-4f52-99cd-19fae727692c request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.029 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5d10f9fc-89ea-4059-8532-7e0aec0791d6', 'name': 'test_0', 'flavor': {'id': 'e125fa74-9e9f-47dc-8c8e-699980f99f10', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'f54c2688-82d2-4cd3-8c3b-96e774162948'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '79ee04b003ca4eb8a045699c7852a8b0', 'user_id': '6a35450c34a344b1a4e63aae1be2b971', 'hostId': 'db9a2769e8f144ae30ff05291a20072f031ca2fe14565f94b8d8a651', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.029 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.029 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.029 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.030 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.031 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-28T17:56:53.029678) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.055 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.056 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.057 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.057 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.058 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc1433970b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.058 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.058 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.058 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.058 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.059 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-28T17:56:53.058569) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.126 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.126 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.127 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.127 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.128 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc1433971d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.128 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.128 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.128 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.128 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.128 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.latency volume: 284678818 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.129 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-28T17:56:53.128481) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.129 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.latency volume: 69824352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.129 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.latency volume: 37055244 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.130 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.130 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc143397c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.131 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.131 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.131 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.131 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.131 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-28T17:56:53.131352) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.137 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 5d10f9fc-89ea-4059-8532-7e0aec0791d6 / tap0e0a227a-62 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.137 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.138 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.138 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc143397620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.138 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.138 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.138 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.138 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.139 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-28T17:56:53.138855) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.160 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/memory.usage volume: 49.5390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.161 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.161 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc143397260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.161 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.161 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.161 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.162 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.162 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.162 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-28T17:56:53.161991) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.162 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.162 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.163 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.163 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc143397290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.163 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.163 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.163 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.163 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.163 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.163 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.164 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-28T17:56:53.163559) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.164 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.164 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.164 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc143408290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.164 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.165 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.165 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.165 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.165 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.165 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.165 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc1433972f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.165 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.165 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.165 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-28T17:56:53.165208) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.166 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.166 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.166 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.latency volume: 632410012 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.166 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.latency volume: 6041958 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.166 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-28T17:56:53.166173) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.166 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.167 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.167 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc144640f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.167 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.167 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.167 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.167 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.167 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.requests volume: 221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.168 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-28T17:56:53.167713) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.168 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.168 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.168 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.168 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc1433976b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.168 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.168 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.169 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.169 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.169 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.169 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.169 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc143397fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.169 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.169 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.169 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.170 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.170 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.170 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-28T17:56:53.169094) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.170 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.170 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-28T17:56:53.170061) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.170 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc14457db80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.170 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.171 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.171 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.171 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.171 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/cpu volume: 32000000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.171 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.171 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc143397950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.171 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.172 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.172 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.172 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.172 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.173 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-28T17:56:53.171248) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.172 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.173 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc143397380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.173 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.173 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.173 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-28T17:56:53.172312) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.173 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.174 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.174 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-28T17:56:53.174062) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.174 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.174 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc143397bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.174 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.174 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.174 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.175 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.175 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.175 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-28T17:56:53.175029) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.175 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.175 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc1433973e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.175 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.175 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.176 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.176 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.176 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.176 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc143397c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.176 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.177 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-28T17:56:53.176266) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.177 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.177 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.177 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.177 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-28T17:56:53.177389) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.177 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.177 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.178 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc143397ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.178 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.178 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.178 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.178 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.178 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.bytes volume: 1822 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.179 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-28T17:56:53.178777) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.179 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.179 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc1460ad370>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.179 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.179 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.179 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.180 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.180 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-28T17:56:53.180016) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.180 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.allocation volume: 21962752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.180 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.181 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.181 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.181 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc143397d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.181 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.181 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.181 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.182 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.182 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.182 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-28T17:56:53.182076) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.182 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.182 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc143397e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.183 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.183 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.183 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.183 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.183 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-28T17:56:53.183460) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.183 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.183 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.184 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc143397650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.184 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.184 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.184 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.184 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.185 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.bytes volume: 1884 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.185 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.185 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-28T17:56:53.184829) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.185 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc143397e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.185 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.186 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.186 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.186 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.186 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.186 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-28T17:56:53.186384) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.186 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.187 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc143397f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.187 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.187 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.187 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.187 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.187 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.187 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.187 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-28T17:56:53.187381) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.187 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc143397230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.188 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.188 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.188 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.188 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.188 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-28T17:56:53.188380) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.188 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.188 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.188 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.189 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.189 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.190 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.190 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.190 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.190 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.190 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.190 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.190 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.190 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.190 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.190 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.190 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.190 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.190 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.190 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.191 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.191 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.191 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.191 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.191 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.191 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.191 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.191 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.191 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.191 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:56:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:56:53.192 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:56:53 compute-0 nova_compute[189296]: 2025-11-28 17:56:53.438 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:56:53 compute-0 nova_compute[189296]: 2025-11-28 17:56:53.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:56:53 compute-0 nova_compute[189296]: 2025-11-28 17:56:53.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 17:56:55 compute-0 nova_compute[189296]: 2025-11-28 17:56:55.620 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:56:55 compute-0 nova_compute[189296]: 2025-11-28 17:56:55.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:56:56 compute-0 nova_compute[189296]: 2025-11-28 17:56:56.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:56:56 compute-0 nova_compute[189296]: 2025-11-28 17:56:56.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 17:56:56 compute-0 nova_compute[189296]: 2025-11-28 17:56:56.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 17:56:57 compute-0 nova_compute[189296]: 2025-11-28 17:56:57.191 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 17:56:57 compute-0 nova_compute[189296]: 2025-11-28 17:56:57.192 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 17:56:57 compute-0 nova_compute[189296]: 2025-11-28 17:56:57.192 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 17:56:57 compute-0 nova_compute[189296]: 2025-11-28 17:56:57.193 189300 DEBUG nova.objects.instance [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5d10f9fc-89ea-4059-8532-7e0aec0791d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 17:56:57 compute-0 nova_compute[189296]: 2025-11-28 17:56:57.942 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:56:58 compute-0 nova_compute[189296]: 2025-11-28 17:56:58.441 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:56:59 compute-0 podman[203494]: time="2025-11-28T17:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 17:56:59 compute-0 podman[203494]: @ - - [28/Nov/2025:17:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 17:56:59 compute-0 podman[203494]: @ - - [28/Nov/2025:17:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4755 "" "Go-http-client/1.1"
Nov 28 17:57:00 compute-0 nova_compute[189296]: 2025-11-28 17:57:00.217 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Updating instance_info_cache with network_info: [{"id": "0e0a227a-6212-4496-8954-fe210b763d0b", "address": "fa:16:3e:28:42:00", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e0a227a-62", "ovs_interfaceid": "0e0a227a-6212-4496-8954-fe210b763d0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 17:57:00 compute-0 nova_compute[189296]: 2025-11-28 17:57:00.683 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 17:57:00 compute-0 nova_compute[189296]: 2025-11-28 17:57:00.684 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 17:57:00 compute-0 nova_compute[189296]: 2025-11-28 17:57:00.684 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:57:00 compute-0 nova_compute[189296]: 2025-11-28 17:57:00.685 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:57:00 compute-0 nova_compute[189296]: 2025-11-28 17:57:00.685 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:57:00 compute-0 nova_compute[189296]: 2025-11-28 17:57:00.686 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:57:00 compute-0 nova_compute[189296]: 2025-11-28 17:57:00.686 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:57:00 compute-0 nova_compute[189296]: 2025-11-28 17:57:00.722 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:57:00 compute-0 nova_compute[189296]: 2025-11-28 17:57:00.723 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:57:00 compute-0 nova_compute[189296]: 2025-11-28 17:57:00.723 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:57:00 compute-0 nova_compute[189296]: 2025-11-28 17:57:00.724 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 17:57:00 compute-0 nova_compute[189296]: 2025-11-28 17:57:00.811 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:57:00 compute-0 nova_compute[189296]: 2025-11-28 17:57:00.872 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:57:00 compute-0 nova_compute[189296]: 2025-11-28 17:57:00.873 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:57:00 compute-0 nova_compute[189296]: 2025-11-28 17:57:00.952 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:57:00 compute-0 nova_compute[189296]: 2025-11-28 17:57:00.953 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:57:01 compute-0 nova_compute[189296]: 2025-11-28 17:57:01.043 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:57:01 compute-0 nova_compute[189296]: 2025-11-28 17:57:01.044 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:57:01 compute-0 nova_compute[189296]: 2025-11-28 17:57:01.102 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:57:01 compute-0 nova_compute[189296]: 2025-11-28 17:57:01.402 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 17:57:01 compute-0 nova_compute[189296]: 2025-11-28 17:57:01.403 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5249MB free_disk=72.38526153564453GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 17:57:01 compute-0 nova_compute[189296]: 2025-11-28 17:57:01.404 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:57:01 compute-0 nova_compute[189296]: 2025-11-28 17:57:01.404 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:57:01 compute-0 openstack_network_exporter[205632]: ERROR   17:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:57:01 compute-0 openstack_network_exporter[205632]: ERROR   17:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:57:01 compute-0 openstack_network_exporter[205632]: ERROR   17:57:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 17:57:01 compute-0 openstack_network_exporter[205632]: ERROR   17:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 17:57:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 17:57:01 compute-0 openstack_network_exporter[205632]: ERROR   17:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 17:57:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 17:57:01 compute-0 nova_compute[189296]: 2025-11-28 17:57:01.559 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 5d10f9fc-89ea-4059-8532-7e0aec0791d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 17:57:01 compute-0 nova_compute[189296]: 2025-11-28 17:57:01.560 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 17:57:01 compute-0 nova_compute[189296]: 2025-11-28 17:57:01.560 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 17:57:01 compute-0 nova_compute[189296]: 2025-11-28 17:57:01.618 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 17:57:01 compute-0 nova_compute[189296]: 2025-11-28 17:57:01.637 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 17:57:01 compute-0 nova_compute[189296]: 2025-11-28 17:57:01.680 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 17:57:01 compute-0 nova_compute[189296]: 2025-11-28 17:57:01.681 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.277s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:57:02 compute-0 podman[239360]: 2025-11-28 17:57:02.000645166 +0000 UTC m=+0.061809640 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 28 17:57:02 compute-0 podman[239359]: 2025-11-28 17:57:02.003084417 +0000 UTC m=+0.067238094 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_id=edpm, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-type=git, version=9.6)
Nov 28 17:57:02 compute-0 podman[239361]: 2025-11-28 17:57:02.038320733 +0000 UTC m=+0.094263978 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 17:57:02 compute-0 nova_compute[189296]: 2025-11-28 17:57:02.946 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:03 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:57:03.269 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:8b:d3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:a2:f8:d3:3f:9a'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 17:57:03 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:57:03.270 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 28 17:57:03 compute-0 nova_compute[189296]: 2025-11-28 17:57:03.271 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:03 compute-0 nova_compute[189296]: 2025-11-28 17:57:03.443 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:57:06.273 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d60b742f-7e94-4137-b50a-cfc8eac54167, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 17:57:07 compute-0 nova_compute[189296]: 2025-11-28 17:57:07.947 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:08 compute-0 podman[239415]: 2025-11-28 17:57:08.017925083 +0000 UTC m=+0.065039410 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 17:57:08 compute-0 podman[239416]: 2025-11-28 17:57:08.050706778 +0000 UTC m=+0.092258969 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Nov 28 17:57:08 compute-0 nova_compute[189296]: 2025-11-28 17:57:08.446 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:08 compute-0 nova_compute[189296]: 2025-11-28 17:57:08.504 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:57:08 compute-0 nova_compute[189296]: 2025-11-28 17:57:08.504 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:57:08 compute-0 nova_compute[189296]: 2025-11-28 17:57:08.520 189300 DEBUG nova.compute.manager [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 28 17:57:08 compute-0 nova_compute[189296]: 2025-11-28 17:57:08.591 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:57:08 compute-0 nova_compute[189296]: 2025-11-28 17:57:08.592 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:57:08 compute-0 nova_compute[189296]: 2025-11-28 17:57:08.601 189300 DEBUG nova.virt.hardware [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 28 17:57:08 compute-0 nova_compute[189296]: 2025-11-28 17:57:08.601 189300 INFO nova.compute.claims [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 28 17:57:08 compute-0 nova_compute[189296]: 2025-11-28 17:57:08.744 189300 DEBUG nova.compute.provider_tree [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 17:57:08 compute-0 nova_compute[189296]: 2025-11-28 17:57:08.775 189300 DEBUG nova.scheduler.client.report [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 17:57:08 compute-0 nova_compute[189296]: 2025-11-28 17:57:08.794 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:57:08 compute-0 nova_compute[189296]: 2025-11-28 17:57:08.795 189300 DEBUG nova.compute.manager [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 28 17:57:08 compute-0 nova_compute[189296]: 2025-11-28 17:57:08.841 189300 DEBUG nova.compute.manager [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 28 17:57:08 compute-0 nova_compute[189296]: 2025-11-28 17:57:08.842 189300 DEBUG nova.network.neutron [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 28 17:57:08 compute-0 nova_compute[189296]: 2025-11-28 17:57:08.865 189300 INFO nova.virt.libvirt.driver [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 28 17:57:08 compute-0 nova_compute[189296]: 2025-11-28 17:57:08.916 189300 DEBUG nova.compute.manager [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.016 189300 DEBUG nova.compute.manager [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.017 189300 DEBUG nova.virt.libvirt.driver [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.018 189300 INFO nova.virt.libvirt.driver [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Creating image(s)#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.018 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "/var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.019 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "/var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.020 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "/var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.033 189300 DEBUG oslo_concurrency.processutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.098 189300 DEBUG oslo_concurrency.processutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.100 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "f8e1ccb00af4752d8a5c7b44d7152dd9458fb598" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.100 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "f8e1ccb00af4752d8a5c7b44d7152dd9458fb598" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.113 189300 DEBUG oslo_concurrency.processutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.175 189300 DEBUG oslo_concurrency.processutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.177 189300 DEBUG oslo_concurrency.processutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598,backing_fmt=raw /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.226 189300 DEBUG oslo_concurrency.processutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598,backing_fmt=raw /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk 1073741824" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.228 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "f8e1ccb00af4752d8a5c7b44d7152dd9458fb598" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.228 189300 DEBUG oslo_concurrency.processutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.323 189300 DEBUG oslo_concurrency.processutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.324 189300 DEBUG nova.virt.disk.api [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Checking if we can resize image /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.324 189300 DEBUG oslo_concurrency.processutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.389 189300 DEBUG oslo_concurrency.processutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.391 189300 DEBUG nova.virt.disk.api [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Cannot resize image /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.391 189300 DEBUG nova.objects.instance [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lazy-loading 'migration_context' on Instance uuid 3e7aebb1-2fd3-449c-be21-02c4d1b57717 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.413 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "/var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.414 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "/var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.415 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "/var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.432 189300 DEBUG oslo_concurrency.processutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.497 189300 DEBUG oslo_concurrency.processutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.498 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.499 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.510 189300 DEBUG oslo_concurrency.processutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.577 189300 DEBUG oslo_concurrency.processutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.579 189300 DEBUG oslo_concurrency.processutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.628 189300 DEBUG oslo_concurrency.processutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 1073741824" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.630 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.631 189300 DEBUG oslo_concurrency.processutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.695 189300 DEBUG oslo_concurrency.processutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.696 189300 DEBUG nova.virt.libvirt.driver [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.696 189300 DEBUG nova.virt.libvirt.driver [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Ensure instance console log exists: /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.697 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.697 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:57:09 compute-0 nova_compute[189296]: 2025-11-28 17:57:09.698 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:57:11 compute-0 podman[239482]: 2025-11-28 17:57:11.027166858 +0000 UTC m=+0.082576231 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, container_name=kepler, name=ubi9, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, release=1214.1726694543, architecture=x86_64, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, 
version=9.4, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release-0.7.12=, vendor=Red Hat, Inc.)
Nov 28 17:57:11 compute-0 podman[239481]: 2025-11-28 17:57:11.03862528 +0000 UTC m=+0.096636397 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 17:57:11 compute-0 nova_compute[189296]: 2025-11-28 17:57:11.464 189300 DEBUG nova.network.neutron [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Successfully updated port: b0754721-6c06-49b9-8437-3ed1125ed2c6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 28 17:57:11 compute-0 nova_compute[189296]: 2025-11-28 17:57:11.484 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "refresh_cache-3e7aebb1-2fd3-449c-be21-02c4d1b57717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 17:57:11 compute-0 nova_compute[189296]: 2025-11-28 17:57:11.484 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquired lock "refresh_cache-3e7aebb1-2fd3-449c-be21-02c4d1b57717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 17:57:11 compute-0 nova_compute[189296]: 2025-11-28 17:57:11.484 189300 DEBUG nova.network.neutron [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 28 17:57:11 compute-0 nova_compute[189296]: 2025-11-28 17:57:11.582 189300 DEBUG nova.compute.manager [req-2a29e854-772c-44cb-b10f-ec5c553adab3 req-9469940e-487c-4b1a-94ba-615748915432 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Received event network-changed-b0754721-6c06-49b9-8437-3ed1125ed2c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 17:57:11 compute-0 nova_compute[189296]: 2025-11-28 17:57:11.582 189300 DEBUG nova.compute.manager [req-2a29e854-772c-44cb-b10f-ec5c553adab3 req-9469940e-487c-4b1a-94ba-615748915432 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Refreshing instance network info cache due to event network-changed-b0754721-6c06-49b9-8437-3ed1125ed2c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 28 17:57:11 compute-0 nova_compute[189296]: 2025-11-28 17:57:11.582 189300 DEBUG oslo_concurrency.lockutils [req-2a29e854-772c-44cb-b10f-ec5c553adab3 req-9469940e-487c-4b1a-94ba-615748915432 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-3e7aebb1-2fd3-449c-be21-02c4d1b57717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 17:57:11 compute-0 nova_compute[189296]: 2025-11-28 17:57:11.664 189300 DEBUG nova.network.neutron [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.889 189300 DEBUG nova.network.neutron [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Updating instance_info_cache with network_info: [{"id": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "address": "fa:16:3e:4f:bc:ca", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0754721-6c", "ovs_interfaceid": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.912 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Releasing lock "refresh_cache-3e7aebb1-2fd3-449c-be21-02c4d1b57717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.913 189300 DEBUG nova.compute.manager [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Instance network_info: |[{"id": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "address": "fa:16:3e:4f:bc:ca", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0754721-6c", "ovs_interfaceid": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.913 189300 DEBUG oslo_concurrency.lockutils [req-2a29e854-772c-44cb-b10f-ec5c553adab3 req-9469940e-487c-4b1a-94ba-615748915432 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-3e7aebb1-2fd3-449c-be21-02c4d1b57717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.913 189300 DEBUG nova.network.neutron [req-2a29e854-772c-44cb-b10f-ec5c553adab3 req-9469940e-487c-4b1a-94ba-615748915432 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Refreshing network info cache for port b0754721-6c06-49b9-8437-3ed1125ed2c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.916 189300 DEBUG nova.virt.libvirt.driver [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Start _get_guest_xml network_info=[{"id": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "address": "fa:16:3e:4f:bc:ca", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0754721-6c", "ovs_interfaceid": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-28T17:54:35Z,direct_url=<?>,disk_format='qcow2',id=f54c2688-82d2-4cd3-8c3b-96e774162948,min_disk=0,min_ram=0,name='cirros',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-28T17:54:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'image_id': 'f54c2688-82d2-4cd3-8c3b-96e774162948'}], 'ephemerals': [{'device_type': 'disk', 'guest_format': None, 'size': 1, 'encryption_options': None, 'device_name': '/dev/vdb', 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.923 189300 WARNING nova.virt.libvirt.driver [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.932 189300 DEBUG nova.virt.libvirt.host [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.932 189300 DEBUG nova.virt.libvirt.host [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.937 189300 DEBUG nova.virt.libvirt.host [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.937 189300 DEBUG nova.virt.libvirt.host [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.937 189300 DEBUG nova.virt.libvirt.driver [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.938 189300 DEBUG nova.virt.hardware [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-28T17:54:40Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='e125fa74-9e9f-47dc-8c8e-699980f99f10',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-28T17:54:35Z,direct_url=<?>,disk_format='qcow2',id=f54c2688-82d2-4cd3-8c3b-96e774162948,min_disk=0,min_ram=0,name='cirros',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-28T17:54:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.938 189300 DEBUG nova.virt.hardware [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.938 189300 DEBUG nova.virt.hardware [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.938 189300 DEBUG nova.virt.hardware [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.938 189300 DEBUG nova.virt.hardware [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.939 189300 DEBUG nova.virt.hardware [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.939 189300 DEBUG nova.virt.hardware [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.939 189300 DEBUG nova.virt.hardware [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.939 189300 DEBUG nova.virt.hardware [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.939 189300 DEBUG nova.virt.hardware [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.940 189300 DEBUG nova.virt.hardware [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.943 189300 DEBUG nova.virt.libvirt.vif [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T17:57:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-7knpyto-6e6fe7uhqqsg-35p6vulzyxtr-vnf-mf7ve6yw5m3s',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7knpyto-6e6fe7uhqqsg-35p6vulzyxtr-vnf-mf7ve6yw5m3s',id=2,image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='ac6a0a76-f006-4c50-a4a8-904a1f128161'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='79ee04b003ca4eb8a045699c7852a8b0',ramdisk_id='',reservation_id='r-i6lofcfj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T17:57:08Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wNjUxOTg2ODQ5OTU1NTczNDc1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA2NTE5ODY4NDk5NTU1NzM0NzU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDY1MTk4Njg0OTk1NTU3MzQ3NT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA2NTE5ODY4NDk5NTU1NzM0NzU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wNjUxOTg2ODQ5OTU1NTczNDc1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wNjUxOTg2ODQ5OTU1NTczNDc1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Nov 28 17:57:12 compute-0 nova_compute[189296]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDY1MTk4Njg0OTk1NTU3MzQ3NT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA2NTE5ODY4NDk5NTU1NzM0NzU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wNjUxOTg2ODQ5OTU1NTczNDc1PT0tLQo=',user_id='6a35450c34a344b1a4e63aae1be2b971',uuid=3e7aebb1-2fd3-449c-be21-02c4d1b57717,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "address": "fa:16:3e:4f:bc:ca", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0754721-6c", "ovs_interfaceid": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.943 189300 DEBUG nova.network.os_vif_util [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converting VIF {"id": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "address": "fa:16:3e:4f:bc:ca", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0754721-6c", "ovs_interfaceid": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.944 189300 DEBUG nova.network.os_vif_util [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:bc:ca,bridge_name='br-int',has_traffic_filtering=True,id=b0754721-6c06-49b9-8437-3ed1125ed2c6,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb0754721-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.944 189300 DEBUG nova.objects.instance [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3e7aebb1-2fd3-449c-be21-02c4d1b57717 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.950 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.965 189300 DEBUG nova.virt.libvirt.driver [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] End _get_guest_xml xml=<domain type="kvm">
Nov 28 17:57:12 compute-0 nova_compute[189296]:  <uuid>3e7aebb1-2fd3-449c-be21-02c4d1b57717</uuid>
Nov 28 17:57:12 compute-0 nova_compute[189296]:  <name>instance-00000002</name>
Nov 28 17:57:12 compute-0 nova_compute[189296]:  <memory>524288</memory>
Nov 28 17:57:12 compute-0 nova_compute[189296]:  <vcpu>1</vcpu>
Nov 28 17:57:12 compute-0 nova_compute[189296]:  <metadata>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <nova:name>vn-7knpyto-6e6fe7uhqqsg-35p6vulzyxtr-vnf-mf7ve6yw5m3s</nova:name>
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <nova:creationTime>2025-11-28 17:57:12</nova:creationTime>
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <nova:flavor name="m1.small">
Nov 28 17:57:12 compute-0 nova_compute[189296]:        <nova:memory>512</nova:memory>
Nov 28 17:57:12 compute-0 nova_compute[189296]:        <nova:disk>1</nova:disk>
Nov 28 17:57:12 compute-0 nova_compute[189296]:        <nova:swap>0</nova:swap>
Nov 28 17:57:12 compute-0 nova_compute[189296]:        <nova:ephemeral>1</nova:ephemeral>
Nov 28 17:57:12 compute-0 nova_compute[189296]:        <nova:vcpus>1</nova:vcpus>
Nov 28 17:57:12 compute-0 nova_compute[189296]:      </nova:flavor>
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <nova:owner>
Nov 28 17:57:12 compute-0 nova_compute[189296]:        <nova:user uuid="6a35450c34a344b1a4e63aae1be2b971">admin</nova:user>
Nov 28 17:57:12 compute-0 nova_compute[189296]:        <nova:project uuid="79ee04b003ca4eb8a045699c7852a8b0">admin</nova:project>
Nov 28 17:57:12 compute-0 nova_compute[189296]:      </nova:owner>
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <nova:root type="image" uuid="f54c2688-82d2-4cd3-8c3b-96e774162948"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <nova:ports>
Nov 28 17:57:12 compute-0 nova_compute[189296]:        <nova:port uuid="b0754721-6c06-49b9-8437-3ed1125ed2c6">
Nov 28 17:57:12 compute-0 nova_compute[189296]:          <nova:ip type="fixed" address="192.168.0.158" ipVersion="4"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:        </nova:port>
Nov 28 17:57:12 compute-0 nova_compute[189296]:      </nova:ports>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    </nova:instance>
Nov 28 17:57:12 compute-0 nova_compute[189296]:  </metadata>
Nov 28 17:57:12 compute-0 nova_compute[189296]:  <sysinfo type="smbios">
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <system>
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <entry name="manufacturer">RDO</entry>
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <entry name="product">OpenStack Compute</entry>
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <entry name="serial">3e7aebb1-2fd3-449c-be21-02c4d1b57717</entry>
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <entry name="uuid">3e7aebb1-2fd3-449c-be21-02c4d1b57717</entry>
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <entry name="family">Virtual Machine</entry>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    </system>
Nov 28 17:57:12 compute-0 nova_compute[189296]:  </sysinfo>
Nov 28 17:57:12 compute-0 nova_compute[189296]:  <os>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <boot dev="hd"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <smbios mode="sysinfo"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:  </os>
Nov 28 17:57:12 compute-0 nova_compute[189296]:  <features>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <acpi/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <apic/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <vmcoreinfo/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:  </features>
Nov 28 17:57:12 compute-0 nova_compute[189296]:  <clock offset="utc">
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <timer name="pit" tickpolicy="delay"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <timer name="hpet" present="no"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:  </clock>
Nov 28 17:57:12 compute-0 nova_compute[189296]:  <cpu mode="host-model" match="exact">
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <topology sockets="1" cores="1" threads="1"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:  </cpu>
Nov 28 17:57:12 compute-0 nova_compute[189296]:  <devices>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <disk type="file" device="disk">
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <target dev="vda" bus="virtio"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    </disk>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <disk type="file" device="disk">
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <target dev="vdb" bus="virtio"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    </disk>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <disk type="file" device="cdrom">
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <driver name="qemu" type="raw" cache="none"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.config"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <target dev="sda" bus="sata"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    </disk>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <interface type="ethernet">
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <mac address="fa:16:3e:4f:bc:ca"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <driver name="vhost" rx_queue_size="512"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <mtu size="1442"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <target dev="tapb0754721-6c"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    </interface>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <serial type="pty">
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <log file="/var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/console.log" append="off"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    </serial>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <video>
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    </video>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <input type="tablet" bus="usb"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <rng model="virtio">
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <backend model="random">/dev/urandom</backend>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    </rng>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <controller type="usb" index="0"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    <memballoon model="virtio">
Nov 28 17:57:12 compute-0 nova_compute[189296]:      <stats period="10"/>
Nov 28 17:57:12 compute-0 nova_compute[189296]:    </memballoon>
Nov 28 17:57:12 compute-0 nova_compute[189296]:  </devices>
Nov 28 17:57:12 compute-0 nova_compute[189296]: </domain>
Nov 28 17:57:12 compute-0 nova_compute[189296]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.966 189300 DEBUG nova.compute.manager [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Preparing to wait for external event network-vif-plugged-b0754721-6c06-49b9-8437-3ed1125ed2c6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.966 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.966 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.966 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.967 189300 DEBUG nova.virt.libvirt.vif [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T17:57:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-7knpyto-6e6fe7uhqqsg-35p6vulzyxtr-vnf-mf7ve6yw5m3s',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7knpyto-6e6fe7uhqqsg-35p6vulzyxtr-vnf-mf7ve6yw5m3s',id=2,image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='ac6a0a76-f006-4c50-a4a8-904a1f128161'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='79ee04b003ca4eb8a045699c7852a8b0',ramdisk_id='',reservation_id='r-i6lofcfj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T17:57:08Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wNjUxOTg2ODQ5OTU1NTczNDc1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA2NTE5ODY4NDk5NTU1NzM0NzU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDY1MTk4Njg0OTk1NTU3MzQ3NT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA2NTE5ODY4NDk5NTU1NzM0NzU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wNjUxOTg2ODQ5OTU1NTczNDc1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wNjUxOTg2ODQ5OTU1NTczNDc1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Nov 28 17:57:12 compute-0 nova_compute[189296]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDY1MTk4Njg0OTk1NTU3MzQ3NT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA2NTE5ODY4NDk5NTU1NzM0NzU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wNjUxOTg2ODQ5OTU1NTczNDc1PT0tLQo=',user_id='6a35450c34a344b1a4e63aae1be2b971',uuid=3e7aebb1-2fd3-449c-be21-02c4d1b57717,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "address": "fa:16:3e:4f:bc:ca", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0754721-6c", "ovs_interfaceid": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.967 189300 DEBUG nova.network.os_vif_util [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converting VIF {"id": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "address": "fa:16:3e:4f:bc:ca", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0754721-6c", "ovs_interfaceid": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.967 189300 DEBUG nova.network.os_vif_util [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:bc:ca,bridge_name='br-int',has_traffic_filtering=True,id=b0754721-6c06-49b9-8437-3ed1125ed2c6,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb0754721-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.968 189300 DEBUG os_vif [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:bc:ca,bridge_name='br-int',has_traffic_filtering=True,id=b0754721-6c06-49b9-8437-3ed1125ed2c6,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb0754721-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.968 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.968 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.969 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.972 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.972 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0754721-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.972 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb0754721-6c, col_values=(('external_ids', {'iface-id': 'b0754721-6c06-49b9-8437-3ed1125ed2c6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4f:bc:ca', 'vm-uuid': '3e7aebb1-2fd3-449c-be21-02c4d1b57717'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.974 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:12 compute-0 NetworkManager[56307]: <info>  [1764352632.9766] manager: (tapb0754721-6c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.976 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.983 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:12 compute-0 nova_compute[189296]: 2025-11-28 17:57:12.984 189300 INFO os_vif [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:bc:ca,bridge_name='br-int',has_traffic_filtering=True,id=b0754721-6c06-49b9-8437-3ed1125ed2c6,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb0754721-6c')#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.033 189300 DEBUG nova.virt.libvirt.driver [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.033 189300 DEBUG nova.virt.libvirt.driver [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.033 189300 DEBUG nova.virt.libvirt.driver [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.033 189300 DEBUG nova.virt.libvirt.driver [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] No VIF found with MAC fa:16:3e:4f:bc:ca, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.034 189300 INFO nova.virt.libvirt.driver [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Using config drive#033[00m
Nov 28 17:57:13 compute-0 rsyslogd[236416]: message too long (8192) with configured size 8096, begin of message is: 2025-11-28 17:57:12.943 189300 DEBUG nova.virt.libvirt.vif [None req-422d7d2b-51 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 28 17:57:13 compute-0 rsyslogd[236416]: message too long (8192) with configured size 8096, begin of message is: 2025-11-28 17:57:12.967 189300 DEBUG nova.virt.libvirt.vif [None req-422d7d2b-51 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.390 189300 INFO nova.virt.libvirt.driver [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Creating config drive at /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.config#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.396 189300 DEBUG oslo_concurrency.processutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu87f64kh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.518 189300 DEBUG oslo_concurrency.processutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu87f64kh" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:57:13 compute-0 kernel: tapb0754721-6c: entered promiscuous mode
Nov 28 17:57:13 compute-0 NetworkManager[56307]: <info>  [1764352633.5997] manager: (tapb0754721-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Nov 28 17:57:13 compute-0 ovn_controller[97771]: 2025-11-28T17:57:13Z|00035|binding|INFO|Claiming lport b0754721-6c06-49b9-8437-3ed1125ed2c6 for this chassis.
Nov 28 17:57:13 compute-0 ovn_controller[97771]: 2025-11-28T17:57:13Z|00036|binding|INFO|b0754721-6c06-49b9-8437-3ed1125ed2c6: Claiming fa:16:3e:4f:bc:ca 192.168.0.158
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.602 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:13 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:57:13.607 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:bc:ca 192.168.0.158'], port_security=['fa:16:3e:4f:bc:ca 192.168.0.158'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-po7lv7knpyto-6e6fe7uhqqsg-35p6vulzyxtr-port-slpgfh5aovby', 'neutron:cidrs': '192.168.0.158/24', 'neutron:device_id': '3e7aebb1-2fd3-449c-be21-02c4d1b57717', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-po7lv7knpyto-6e6fe7uhqqsg-35p6vulzyxtr-port-slpgfh5aovby', 'neutron:project_id': '79ee04b003ca4eb8a045699c7852a8b0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a309e23b-efb6-4377-8050-5a658324ee07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.194'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37710b57-0bdd-4c1a-aa8d-366aa83fbf51, chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=b0754721-6c06-49b9-8437-3ed1125ed2c6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 17:57:13 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:57:13.609 106624 INFO neutron.agent.ovn.metadata.agent [-] Port b0754721-6c06-49b9-8437-3ed1125ed2c6 in datapath 5cc11a5f-7338-49fd-ba02-2db7ff676c4f bound to our chassis#033[00m
Nov 28 17:57:13 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:57:13.610 106624 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cc11a5f-7338-49fd-ba02-2db7ff676c4f#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.618 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:13 compute-0 ovn_controller[97771]: 2025-11-28T17:57:13Z|00037|binding|INFO|Setting lport b0754721-6c06-49b9-8437-3ed1125ed2c6 ovn-installed in OVS
Nov 28 17:57:13 compute-0 ovn_controller[97771]: 2025-11-28T17:57:13Z|00038|binding|INFO|Setting lport b0754721-6c06-49b9-8437-3ed1125ed2c6 up in Southbound
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.620 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:13 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:57:13.630 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[46b2cb08-2b7c-406d-b103-1568e7e83f36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 17:57:13 compute-0 systemd-machined[155703]: New machine qemu-2-instance-00000002.
Nov 28 17:57:13 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Nov 28 17:57:13 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:57:13.670 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[a2920334-de5d-49b0-a51b-55266b62242c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 17:57:13 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:57:13.674 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[a7066ad1-ed3c-4e64-8453-102f5e0c6171]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 17:57:13 compute-0 systemd-udevd[239572]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 17:57:13 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:57:13.708 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[b7a62f11-3f22-417e-8958-225061476555]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 17:57:13 compute-0 NetworkManager[56307]: <info>  [1764352633.7110] device (tapb0754721-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 28 17:57:13 compute-0 NetworkManager[56307]: <info>  [1764352633.7117] device (tapb0754721-6c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 28 17:57:13 compute-0 podman[239536]: 2025-11-28 17:57:13.718671993 +0000 UTC m=+0.134093476 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller)
Nov 28 17:57:13 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:57:13.732 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[81295771-1aca-4168-a5ae-02c066e55e9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc11a5f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:38:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 370971, 'reachable_time': 20370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239582, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 17:57:13 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:57:13.748 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[6aa4a945-86dd-4a79-b7fb-28a87740b4c2]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap5cc11a5f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 370983, 'tstamp': 370983}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239585, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5cc11a5f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 370986, 'tstamp': 370986}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239585, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 17:57:13 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:57:13.750 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc11a5f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.752 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.753 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:13 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:57:13.754 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cc11a5f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 17:57:13 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:57:13.754 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 17:57:13 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:57:13.754 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cc11a5f-70, col_values=(('external_ids', {'iface-id': '467e3797-177d-4174-b963-0efbd15595b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 17:57:13 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:57:13.754 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.840 189300 DEBUG nova.compute.manager [req-960ba057-a843-4db8-99a0-4e75ea840510 req-708b0bb3-20b7-49ff-99a7-72d304e3dda5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Received event network-vif-plugged-b0754721-6c06-49b9-8437-3ed1125ed2c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.840 189300 DEBUG oslo_concurrency.lockutils [req-960ba057-a843-4db8-99a0-4e75ea840510 req-708b0bb3-20b7-49ff-99a7-72d304e3dda5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.841 189300 DEBUG oslo_concurrency.lockutils [req-960ba057-a843-4db8-99a0-4e75ea840510 req-708b0bb3-20b7-49ff-99a7-72d304e3dda5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.842 189300 DEBUG oslo_concurrency.lockutils [req-960ba057-a843-4db8-99a0-4e75ea840510 req-708b0bb3-20b7-49ff-99a7-72d304e3dda5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.842 189300 DEBUG nova.compute.manager [req-960ba057-a843-4db8-99a0-4e75ea840510 req-708b0bb3-20b7-49ff-99a7-72d304e3dda5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Processing event network-vif-plugged-b0754721-6c06-49b9-8437-3ed1125ed2c6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.954 189300 DEBUG nova.compute.manager [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.955 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764352633.9540548, 3e7aebb1-2fd3-449c-be21-02c4d1b57717 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.955 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] VM Started (Lifecycle Event)#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.959 189300 DEBUG nova.virt.libvirt.driver [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.963 189300 INFO nova.virt.libvirt.driver [-] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Instance spawned successfully.#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.963 189300 DEBUG nova.virt.libvirt.driver [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.975 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.983 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.986 189300 DEBUG nova.virt.libvirt.driver [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.986 189300 DEBUG nova.virt.libvirt.driver [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.987 189300 DEBUG nova.virt.libvirt.driver [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.987 189300 DEBUG nova.virt.libvirt.driver [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.988 189300 DEBUG nova.virt.libvirt.driver [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 17:57:13 compute-0 nova_compute[189296]: 2025-11-28 17:57:13.988 189300 DEBUG nova.virt.libvirt.driver [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 17:57:14 compute-0 nova_compute[189296]: 2025-11-28 17:57:14.008 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 28 17:57:14 compute-0 nova_compute[189296]: 2025-11-28 17:57:14.008 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764352633.9542751, 3e7aebb1-2fd3-449c-be21-02c4d1b57717 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 17:57:14 compute-0 nova_compute[189296]: 2025-11-28 17:57:14.009 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] VM Paused (Lifecycle Event)#033[00m
Nov 28 17:57:14 compute-0 nova_compute[189296]: 2025-11-28 17:57:14.028 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 17:57:14 compute-0 nova_compute[189296]: 2025-11-28 17:57:14.033 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764352633.9583945, 3e7aebb1-2fd3-449c-be21-02c4d1b57717 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 17:57:14 compute-0 nova_compute[189296]: 2025-11-28 17:57:14.033 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] VM Resumed (Lifecycle Event)#033[00m
Nov 28 17:57:14 compute-0 nova_compute[189296]: 2025-11-28 17:57:14.052 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 17:57:14 compute-0 nova_compute[189296]: 2025-11-28 17:57:14.057 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 17:57:14 compute-0 nova_compute[189296]: 2025-11-28 17:57:14.079 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 28 17:57:14 compute-0 nova_compute[189296]: 2025-11-28 17:57:14.088 189300 INFO nova.compute.manager [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Took 5.07 seconds to spawn the instance on the hypervisor.#033[00m
Nov 28 17:57:14 compute-0 nova_compute[189296]: 2025-11-28 17:57:14.088 189300 DEBUG nova.compute.manager [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 17:57:14 compute-0 nova_compute[189296]: 2025-11-28 17:57:14.135 189300 INFO nova.compute.manager [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Took 5.57 seconds to build instance.#033[00m
Nov 28 17:57:14 compute-0 nova_compute[189296]: 2025-11-28 17:57:14.149 189300 DEBUG oslo_concurrency.lockutils [None req-422d7d2b-5138-4b09-bf2d-1638a01023f8 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:57:14 compute-0 nova_compute[189296]: 2025-11-28 17:57:14.433 189300 DEBUG nova.network.neutron [req-2a29e854-772c-44cb-b10f-ec5c553adab3 req-9469940e-487c-4b1a-94ba-615748915432 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Updated VIF entry in instance network info cache for port b0754721-6c06-49b9-8437-3ed1125ed2c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 28 17:57:14 compute-0 nova_compute[189296]: 2025-11-28 17:57:14.433 189300 DEBUG nova.network.neutron [req-2a29e854-772c-44cb-b10f-ec5c553adab3 req-9469940e-487c-4b1a-94ba-615748915432 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Updating instance_info_cache with network_info: [{"id": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "address": "fa:16:3e:4f:bc:ca", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0754721-6c", "ovs_interfaceid": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 17:57:14 compute-0 nova_compute[189296]: 2025-11-28 17:57:14.451 189300 DEBUG oslo_concurrency.lockutils [req-2a29e854-772c-44cb-b10f-ec5c553adab3 req-9469940e-487c-4b1a-94ba-615748915432 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-3e7aebb1-2fd3-449c-be21-02c4d1b57717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 17:57:15 compute-0 nova_compute[189296]: 2025-11-28 17:57:15.920 189300 DEBUG nova.compute.manager [req-56c3aad6-4751-4310-ae62-c54aa156be69 req-1ce622e1-575f-4f14-95c1-7176a760c3b7 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Received event network-vif-plugged-b0754721-6c06-49b9-8437-3ed1125ed2c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 17:57:15 compute-0 nova_compute[189296]: 2025-11-28 17:57:15.921 189300 DEBUG oslo_concurrency.lockutils [req-56c3aad6-4751-4310-ae62-c54aa156be69 req-1ce622e1-575f-4f14-95c1-7176a760c3b7 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:57:15 compute-0 nova_compute[189296]: 2025-11-28 17:57:15.922 189300 DEBUG oslo_concurrency.lockutils [req-56c3aad6-4751-4310-ae62-c54aa156be69 req-1ce622e1-575f-4f14-95c1-7176a760c3b7 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:57:15 compute-0 nova_compute[189296]: 2025-11-28 17:57:15.922 189300 DEBUG oslo_concurrency.lockutils [req-56c3aad6-4751-4310-ae62-c54aa156be69 req-1ce622e1-575f-4f14-95c1-7176a760c3b7 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:57:15 compute-0 nova_compute[189296]: 2025-11-28 17:57:15.922 189300 DEBUG nova.compute.manager [req-56c3aad6-4751-4310-ae62-c54aa156be69 req-1ce622e1-575f-4f14-95c1-7176a760c3b7 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] No waiting events found dispatching network-vif-plugged-b0754721-6c06-49b9-8437-3ed1125ed2c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 17:57:15 compute-0 nova_compute[189296]: 2025-11-28 17:57:15.923 189300 WARNING nova.compute.manager [req-56c3aad6-4751-4310-ae62-c54aa156be69 req-1ce622e1-575f-4f14-95c1-7176a760c3b7 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Received unexpected event network-vif-plugged-b0754721-6c06-49b9-8437-3ed1125ed2c6 for instance with vm_state active and task_state None.#033[00m
Nov 28 17:57:17 compute-0 nova_compute[189296]: 2025-11-28 17:57:17.953 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:17 compute-0 nova_compute[189296]: 2025-11-28 17:57:17.974 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:22 compute-0 nova_compute[189296]: 2025-11-28 17:57:22.956 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:22 compute-0 nova_compute[189296]: 2025-11-28 17:57:22.977 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:23 compute-0 podman[239596]: 2025-11-28 17:57:23.015446776 +0000 UTC m=+0.073667431 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 17:57:27 compute-0 nova_compute[189296]: 2025-11-28 17:57:27.959 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:27 compute-0 nova_compute[189296]: 2025-11-28 17:57:27.980 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:29 compute-0 podman[203494]: time="2025-11-28T17:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 17:57:29 compute-0 podman[203494]: @ - - [28/Nov/2025:17:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 17:57:29 compute-0 podman[203494]: @ - - [28/Nov/2025:17:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4759 "" "Go-http-client/1.1"
Nov 28 17:57:31 compute-0 openstack_network_exporter[205632]: ERROR   17:57:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 17:57:31 compute-0 openstack_network_exporter[205632]: ERROR   17:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:57:31 compute-0 openstack_network_exporter[205632]: ERROR   17:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:57:31 compute-0 openstack_network_exporter[205632]: ERROR   17:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 17:57:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 17:57:31 compute-0 openstack_network_exporter[205632]: ERROR   17:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 17:57:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 17:57:32 compute-0 nova_compute[189296]: 2025-11-28 17:57:32.962 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:32 compute-0 nova_compute[189296]: 2025-11-28 17:57:32.982 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:33 compute-0 podman[239623]: 2025-11-28 17:57:33.044281437 +0000 UTC m=+0.095416297 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=f26160204c78771e78cdd2489258319b)
Nov 28 17:57:33 compute-0 podman[239622]: 2025-11-28 17:57:33.062589407 +0000 UTC m=+0.121509289 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., name=ubi9-minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 28 17:57:33 compute-0 podman[239624]: 2025-11-28 17:57:33.063293914 +0000 UTC m=+0.111445451 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Nov 28 17:57:37 compute-0 nova_compute[189296]: 2025-11-28 17:57:37.967 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:37 compute-0 nova_compute[189296]: 2025-11-28 17:57:37.984 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:38 compute-0 podman[239679]: 2025-11-28 17:57:38.998355676 +0000 UTC m=+0.060445408 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 28 17:57:39 compute-0 podman[239680]: 2025-11-28 17:57:39.026743844 +0000 UTC m=+0.080358468 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 28 17:57:42 compute-0 podman[239716]: 2025-11-28 17:57:42.0030602 +0000 UTC m=+0.065109682 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=9.4, distribution-scope=public, build-date=2024-09-18T21:23:30, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 28 17:57:42 compute-0 podman[239715]: 2025-11-28 17:57:42.021561335 +0000 UTC m=+0.087710938 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 17:57:42 compute-0 nova_compute[189296]: 2025-11-28 17:57:42.971 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:42 compute-0 nova_compute[189296]: 2025-11-28 17:57:42.987 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:43 compute-0 ovn_controller[97771]: 2025-11-28T17:57:43Z|00039|memory_trim|INFO|Detected inactivity (last active 30012 ms ago): trimming memory
Nov 28 17:57:44 compute-0 podman[239759]: 2025-11-28 17:57:44.079411342 +0000 UTC m=+0.143165670 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 28 17:57:47 compute-0 ovn_controller[97771]: 2025-11-28T17:57:47Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4f:bc:ca 192.168.0.158
Nov 28 17:57:47 compute-0 ovn_controller[97771]: 2025-11-28T17:57:47Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4f:bc:ca 192.168.0.158
Nov 28 17:57:47 compute-0 nova_compute[189296]: 2025-11-28 17:57:47.973 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:47 compute-0 nova_compute[189296]: 2025-11-28 17:57:47.989 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:49 compute-0 nova_compute[189296]: 2025-11-28 17:57:49.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:57:49 compute-0 nova_compute[189296]: 2025-11-28 17:57:49.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 28 17:57:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:57:52.601 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:57:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:57:52.602 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:57:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:57:52.603 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:57:52 compute-0 nova_compute[189296]: 2025-11-28 17:57:52.755 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:57:52 compute-0 nova_compute[189296]: 2025-11-28 17:57:52.755 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 28 17:57:52 compute-0 nova_compute[189296]: 2025-11-28 17:57:52.773 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 28 17:57:52 compute-0 nova_compute[189296]: 2025-11-28 17:57:52.978 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:52 compute-0 nova_compute[189296]: 2025-11-28 17:57:52.991 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:52 compute-0 nova_compute[189296]: 2025-11-28 17:57:52.993 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:57:53 compute-0 nova_compute[189296]: 2025-11-28 17:57:53.015 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Triggering sync for uuid 5d10f9fc-89ea-4059-8532-7e0aec0791d6 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 28 17:57:53 compute-0 nova_compute[189296]: 2025-11-28 17:57:53.015 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Triggering sync for uuid 3e7aebb1-2fd3-449c-be21-02c4d1b57717 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 28 17:57:53 compute-0 nova_compute[189296]: 2025-11-28 17:57:53.016 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:57:53 compute-0 nova_compute[189296]: 2025-11-28 17:57:53.016 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:57:53 compute-0 nova_compute[189296]: 2025-11-28 17:57:53.017 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:57:53 compute-0 nova_compute[189296]: 2025-11-28 17:57:53.017 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:57:53 compute-0 nova_compute[189296]: 2025-11-28 17:57:53.067 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:57:53 compute-0 nova_compute[189296]: 2025-11-28 17:57:53.068 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:57:53 compute-0 nova_compute[189296]: 2025-11-28 17:57:53.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:57:54 compute-0 podman[239800]: 2025-11-28 17:57:54.026379025 +0000 UTC m=+0.087011010 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 28 17:57:54 compute-0 nova_compute[189296]: 2025-11-28 17:57:54.635 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:57:54 compute-0 nova_compute[189296]: 2025-11-28 17:57:54.636 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 17:57:55 compute-0 nova_compute[189296]: 2025-11-28 17:57:55.621 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:57:55 compute-0 nova_compute[189296]: 2025-11-28 17:57:55.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:57:57 compute-0 nova_compute[189296]: 2025-11-28 17:57:57.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:57:57 compute-0 nova_compute[189296]: 2025-11-28 17:57:57.624 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 17:57:57 compute-0 nova_compute[189296]: 2025-11-28 17:57:57.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 17:57:57 compute-0 nova_compute[189296]: 2025-11-28 17:57:57.980 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:57 compute-0 nova_compute[189296]: 2025-11-28 17:57:57.993 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:57:58 compute-0 nova_compute[189296]: 2025-11-28 17:57:58.259 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 17:57:58 compute-0 nova_compute[189296]: 2025-11-28 17:57:58.260 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 17:57:58 compute-0 nova_compute[189296]: 2025-11-28 17:57:58.261 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 17:57:58 compute-0 nova_compute[189296]: 2025-11-28 17:57:58.261 189300 DEBUG nova.objects.instance [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5d10f9fc-89ea-4059-8532-7e0aec0791d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 17:57:59 compute-0 nova_compute[189296]: 2025-11-28 17:57:59.316 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Updating instance_info_cache with network_info: [{"id": "0e0a227a-6212-4496-8954-fe210b763d0b", "address": "fa:16:3e:28:42:00", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e0a227a-62", "ovs_interfaceid": "0e0a227a-6212-4496-8954-fe210b763d0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 17:57:59 compute-0 nova_compute[189296]: 2025-11-28 17:57:59.330 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 17:57:59 compute-0 nova_compute[189296]: 2025-11-28 17:57:59.330 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 17:57:59 compute-0 nova_compute[189296]: 2025-11-28 17:57:59.331 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:57:59 compute-0 nova_compute[189296]: 2025-11-28 17:57:59.331 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:57:59 compute-0 nova_compute[189296]: 2025-11-28 17:57:59.332 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:57:59 compute-0 nova_compute[189296]: 2025-11-28 17:57:59.351 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:57:59 compute-0 nova_compute[189296]: 2025-11-28 17:57:59.352 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:57:59 compute-0 nova_compute[189296]: 2025-11-28 17:57:59.352 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:57:59 compute-0 nova_compute[189296]: 2025-11-28 17:57:59.352 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 17:57:59 compute-0 nova_compute[189296]: 2025-11-28 17:57:59.434 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:57:59 compute-0 nova_compute[189296]: 2025-11-28 17:57:59.494 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:57:59 compute-0 nova_compute[189296]: 2025-11-28 17:57:59.495 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:57:59 compute-0 nova_compute[189296]: 2025-11-28 17:57:59.590 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:57:59 compute-0 nova_compute[189296]: 2025-11-28 17:57:59.591 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:57:59 compute-0 nova_compute[189296]: 2025-11-28 17:57:59.653 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:57:59 compute-0 nova_compute[189296]: 2025-11-28 17:57:59.654 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:57:59 compute-0 nova_compute[189296]: 2025-11-28 17:57:59.712 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:57:59 compute-0 nova_compute[189296]: 2025-11-28 17:57:59.719 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:57:59 compute-0 podman[203494]: time="2025-11-28T17:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 17:57:59 compute-0 podman[203494]: @ - - [28/Nov/2025:17:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 17:57:59 compute-0 podman[203494]: @ - - [28/Nov/2025:17:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4764 "" "Go-http-client/1.1"
Nov 28 17:57:59 compute-0 nova_compute[189296]: 2025-11-28 17:57:59.777 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:57:59 compute-0 nova_compute[189296]: 2025-11-28 17:57:59.778 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:57:59 compute-0 nova_compute[189296]: 2025-11-28 17:57:59.831 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:57:59 compute-0 nova_compute[189296]: 2025-11-28 17:57:59.832 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:57:59 compute-0 nova_compute[189296]: 2025-11-28 17:57:59.892 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:57:59 compute-0 nova_compute[189296]: 2025-11-28 17:57:59.893 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:57:59 compute-0 nova_compute[189296]: 2025-11-28 17:57:59.973 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:58:00 compute-0 nova_compute[189296]: 2025-11-28 17:58:00.314 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 17:58:00 compute-0 nova_compute[189296]: 2025-11-28 17:58:00.315 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5080MB free_disk=72.36323928833008GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 17:58:00 compute-0 nova_compute[189296]: 2025-11-28 17:58:00.315 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:58:00 compute-0 nova_compute[189296]: 2025-11-28 17:58:00.316 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:58:00 compute-0 nova_compute[189296]: 2025-11-28 17:58:00.483 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 5d10f9fc-89ea-4059-8532-7e0aec0791d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 17:58:00 compute-0 nova_compute[189296]: 2025-11-28 17:58:00.483 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 3e7aebb1-2fd3-449c-be21-02c4d1b57717 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 17:58:00 compute-0 nova_compute[189296]: 2025-11-28 17:58:00.484 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 17:58:00 compute-0 nova_compute[189296]: 2025-11-28 17:58:00.484 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 17:58:00 compute-0 nova_compute[189296]: 2025-11-28 17:58:00.629 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 17:58:00 compute-0 nova_compute[189296]: 2025-11-28 17:58:00.645 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 17:58:00 compute-0 nova_compute[189296]: 2025-11-28 17:58:00.670 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 17:58:00 compute-0 nova_compute[189296]: 2025-11-28 17:58:00.671 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.355s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:58:00 compute-0 nova_compute[189296]: 2025-11-28 17:58:00.964 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:58:01 compute-0 openstack_network_exporter[205632]: ERROR   17:58:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 17:58:01 compute-0 openstack_network_exporter[205632]: ERROR   17:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:58:01 compute-0 openstack_network_exporter[205632]: ERROR   17:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:58:01 compute-0 openstack_network_exporter[205632]: ERROR   17:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 17:58:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 17:58:01 compute-0 openstack_network_exporter[205632]: ERROR   17:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 17:58:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 17:58:01 compute-0 nova_compute[189296]: 2025-11-28 17:58:01.620 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:58:01 compute-0 nova_compute[189296]: 2025-11-28 17:58:01.648 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:58:02 compute-0 nova_compute[189296]: 2025-11-28 17:58:02.983 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:58:02 compute-0 nova_compute[189296]: 2025-11-28 17:58:02.995 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:58:04 compute-0 podman[239850]: 2025-11-28 17:58:04.010326399 +0000 UTC m=+0.075048665 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, 
io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, managed_by=edpm_ansible, io.openshift.expose-services=, version=9.6, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, release=1755695350)
Nov 28 17:58:04 compute-0 podman[239852]: 2025-11-28 17:58:04.031739176 +0000 UTC m=+0.089200354 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd)
Nov 28 17:58:04 compute-0 podman[239851]: 2025-11-28 17:58:04.034435393 +0000 UTC m=+0.096920165 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=f26160204c78771e78cdd2489258319b, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Nov 28 17:58:07 compute-0 nova_compute[189296]: 2025-11-28 17:58:07.985 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:58:07 compute-0 nova_compute[189296]: 2025-11-28 17:58:07.997 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:58:10 compute-0 podman[239908]: 2025-11-28 17:58:10.003479592 +0000 UTC m=+0.064660895 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, 
org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 28 17:58:10 compute-0 podman[239909]: 2025-11-28 17:58:10.057425254 +0000 UTC m=+0.112391146 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Nov 28 17:58:12 compute-0 nova_compute[189296]: 2025-11-28 17:58:12.988 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:58:12 compute-0 nova_compute[189296]: 2025-11-28 17:58:12.999 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:58:13 compute-0 podman[239947]: 2025-11-28 17:58:13.067808589 +0000 UTC m=+0.113750598 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, release=1214.1726694543, distribution-scope=public, version=9.4, build-date=2024-09-18T21:23:30, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 28 17:58:13 compute-0 podman[239946]: 2025-11-28 17:58:13.069713166 +0000 UTC m=+0.116504005 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 17:58:14 compute-0 podman[239985]: 2025-11-28 17:58:14.827874419 +0000 UTC m=+0.135585800 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 28 17:58:17 compute-0 nova_compute[189296]: 2025-11-28 17:58:17.991 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:58:18 compute-0 nova_compute[189296]: 2025-11-28 17:58:18.001 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:58:22 compute-0 nova_compute[189296]: 2025-11-28 17:58:22.994 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:58:23 compute-0 nova_compute[189296]: 2025-11-28 17:58:23.003 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:58:25 compute-0 podman[240012]: 2025-11-28 17:58:25.014237731 +0000 UTC m=+0.072819003 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 17:58:27 compute-0 nova_compute[189296]: 2025-11-28 17:58:27.997 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:58:28 compute-0 nova_compute[189296]: 2025-11-28 17:58:28.005 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:58:29 compute-0 podman[203494]: time="2025-11-28T17:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 17:58:29 compute-0 podman[203494]: @ - - [28/Nov/2025:17:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 17:58:29 compute-0 podman[203494]: @ - - [28/Nov/2025:17:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4771 "" "Go-http-client/1.1"
Nov 28 17:58:31 compute-0 openstack_network_exporter[205632]: ERROR   17:58:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 17:58:31 compute-0 openstack_network_exporter[205632]: ERROR   17:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:58:31 compute-0 openstack_network_exporter[205632]: ERROR   17:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:58:31 compute-0 openstack_network_exporter[205632]: ERROR   17:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 17:58:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 17:58:31 compute-0 openstack_network_exporter[205632]: ERROR   17:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 17:58:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 17:58:33 compute-0 nova_compute[189296]: 2025-11-28 17:58:32.999 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:58:33 compute-0 nova_compute[189296]: 2025-11-28 17:58:33.008 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:58:35 compute-0 podman[240036]: 2025-11-28 17:58:35.012040479 +0000 UTC m=+0.072279799 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, name=ubi9-minimal, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, 
vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, distribution-scope=public, io.openshift.expose-services=, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_id=edpm, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 28 17:58:35 compute-0 podman[240037]: 2025-11-28 17:58:35.01205473 +0000 UTC m=+0.064506381 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=f26160204c78771e78cdd2489258319b, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 28 17:58:35 compute-0 podman[240038]: 2025-11-28 17:58:35.017153974 +0000 UTC m=+0.066117939 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd)
Nov 28 17:58:38 compute-0 nova_compute[189296]: 2025-11-28 17:58:38.001 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 17:58:38 compute-0 nova_compute[189296]: 2025-11-28 17:58:38.010 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 17:58:41 compute-0 podman[240094]: 2025-11-28 17:58:41.023157725 +0000 UTC m=+0.076561455 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, 
org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 28 17:58:41 compute-0 podman[240093]: 2025-11-28 17:58:41.038421535 +0000 UTC m=+0.091539718 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 28 17:58:43 compute-0 nova_compute[189296]: 2025-11-28 17:58:43.005 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 17:58:43 compute-0 nova_compute[189296]: 2025-11-28 17:58:43.011 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 17:58:44 compute-0 podman[240133]: 2025-11-28 17:58:44.022706685 +0000 UTC m=+0.075634961 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, release=1214.1726694543, vcs-type=git, maintainer=Red Hat, Inc., config_id=edpm, name=ubi9, vendor=Red Hat, Inc., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=ubi9-container, release-0.7.12=, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 28 17:58:44 compute-0 podman[240132]: 2025-11-28 17:58:44.04430372 +0000 UTC m=+0.096327635 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 28 17:58:45 compute-0 podman[240172]: 2025-11-28 17:58:45.081107501 +0000 UTC m=+0.137888947 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 28 17:58:48 compute-0 nova_compute[189296]: 2025-11-28 17:58:48.007 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 17:58:48 compute-0 nova_compute[189296]: 2025-11-28 17:58:48.012 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.976 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.977 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.977 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.978 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc143395760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.978 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.983 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.983 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.985 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.985 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.985 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.986 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5d10f9fc-89ea-4059-8532-7e0aec0791d6', 'name': 'test_0', 'flavor': {'id': 'e125fa74-9e9f-47dc-8c8e-699980f99f10', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'f54c2688-82d2-4cd3-8c3b-96e774162948'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '79ee04b003ca4eb8a045699c7852a8b0', 'user_id': '6a35450c34a344b1a4e63aae1be2b971', 'hostId': 'db9a2769e8f144ae30ff05291a20072f031ca2fe14565f94b8d8a651', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.986 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.989 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.989 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.989 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.990 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.990 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.991 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.991 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 3e7aebb1-2fd3-449c-be21-02c4d1b57717 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.992 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.993 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/3e7aebb1-2fd3-449c-be21-02c4d1b57717 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}1b19fef84fe76c5f8eb41f423a94cfc31b2af00fb7940935967c184dd40fa55a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.993 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:58:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:51.995 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.549 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Fri, 28 Nov 2025 17:58:52 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-5c8dd62d-e991-410a-ac32-f31c9d78a568 x-openstack-request-id: req-5c8dd62d-e991-410a-ac32-f31c9d78a568 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.550 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "3e7aebb1-2fd3-449c-be21-02c4d1b57717", "name": "vn-7knpyto-6e6fe7uhqqsg-35p6vulzyxtr-vnf-mf7ve6yw5m3s", "status": "ACTIVE", "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "user_id": "6a35450c34a344b1a4e63aae1be2b971", "metadata": {"metering.server_group": "ac6a0a76-f006-4c50-a4a8-904a1f128161"}, "hostId": "db9a2769e8f144ae30ff05291a20072f031ca2fe14565f94b8d8a651", "image": {"id": "f54c2688-82d2-4cd3-8c3b-96e774162948", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/f54c2688-82d2-4cd3-8c3b-96e774162948"}]}, "flavor": {"id": "e125fa74-9e9f-47dc-8c8e-699980f99f10", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/e125fa74-9e9f-47dc-8c8e-699980f99f10"}]}, "created": "2025-11-28T17:57:07Z", "updated": "2025-11-28T17:57:14Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.158", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4f:bc:ca"}, {"version": 4, "addr": "192.168.122.194", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4f:bc:ca"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/3e7aebb1-2fd3-449c-be21-02c4d1b57717"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/3e7aebb1-2fd3-449c-be21-02c4d1b57717"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-28T17:57:14.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, 
"OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.550 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/3e7aebb1-2fd3-449c-be21-02c4d1b57717 used request id req-5c8dd62d-e991-410a-ac32-f31c9d78a568 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.552 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3e7aebb1-2fd3-449c-be21-02c4d1b57717', 'name': 'vn-7knpyto-6e6fe7uhqqsg-35p6vulzyxtr-vnf-mf7ve6yw5m3s', 'flavor': {'id': 'e125fa74-9e9f-47dc-8c8e-699980f99f10', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'f54c2688-82d2-4cd3-8c3b-96e774162948'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '79ee04b003ca4eb8a045699c7852a8b0', 'user_id': '6a35450c34a344b1a4e63aae1be2b971', 'hostId': 'db9a2769e8f144ae30ff05291a20072f031ca2fe14565f94b8d8a651', 'status': 'active', 'metadata': {'metering.server_group': 'ac6a0a76-f006-4c50-a4a8-904a1f128161'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.552 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.553 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.553 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.553 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.554 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-28T17:58:52.553754) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.578 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.579 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.580 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:58:52.602 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 17:58:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:58:52.603 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.603 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:58:52.604 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.604 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.605 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.606 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.606 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc1433970b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.606 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.607 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.607 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.607 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.607 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-28T17:58:52.607488) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.683 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.684 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.684 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.758 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.759 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.759 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.759 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.760 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc1433971d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.760 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.760 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.760 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.760 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.760 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.latency volume: 284678818 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.760 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.latency volume: 69824352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.760 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.latency volume: 37055244 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.761 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.latency volume: 321385299 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.761 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.latency volume: 64866438 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.761 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.latency volume: 53024748 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.761 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.761 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc143397c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.762 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.762 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.762 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.762 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.762 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-28T17:58:52.760468) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.762 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-28T17:58:52.762342) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.766 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.769 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 3e7aebb1-2fd3-449c-be21-02c4d1b57717 / tapb0754721-6c inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.770 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.770 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.770 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc143397620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.770 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.770 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.770 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.770 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.771 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-28T17:58:52.770764) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.795 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.827 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/memory.usage volume: 49.1953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.827 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.828 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc143397260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.828 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.828 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.828 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.829 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-28T17:58:52.828879) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.828 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.829 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.830 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.830 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.831 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.831 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.832 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.833 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.833 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc143397290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.833 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.834 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.834 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.834 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.835 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-28T17:58:52.834585) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.835 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.835 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.836 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.837 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.bytes volume: 41811968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.837 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.838 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.839 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.839 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc143408290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.840 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.840 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.840 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.840 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.841 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.841 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.843 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.843 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc1433972f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.843 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.843 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.844 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.844 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.845 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.latency volume: 646402207 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.845 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.latency volume: 6041958 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.845 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.845 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.latency volume: 988049539 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.845 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.latency volume: 9215217 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.846 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.846 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-28T17:58:52.840664) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.847 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.847 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc144640f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.847 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-28T17:58:52.844889) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.847 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.847 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.847 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.847 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.847 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.847 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.848 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-28T17:58:52.847485) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.848 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.848 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.requests volume: 235 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.848 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.848 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.849 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.849 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc1433976b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.849 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.849 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.849 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.850 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.850 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.850 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-28T17:58:52.849998) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.850 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.851 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.851 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc143397fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.851 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.851 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.851 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.851 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.851 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.852 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.852 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-28T17:58:52.851728) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.852 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.852 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc14457db80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.852 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.853 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.853 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.853 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.853 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/cpu volume: 33260000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.853 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-28T17:58:52.853237) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.853 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/cpu volume: 54660000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.854 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.854 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc143397950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.854 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.854 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.854 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.854 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.854 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.855 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-28T17:58:52.854734) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.855 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-7knpyto-6e6fe7uhqqsg-35p6vulzyxtr-vnf-mf7ve6yw5m3s>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-7knpyto-6e6fe7uhqqsg-35p6vulzyxtr-vnf-mf7ve6yw5m3s>]
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.855 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc143397380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.855 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.855 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.855 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.856 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.856 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.856 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc143397bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.856 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.856 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.857 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.857 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.857 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-28T17:58:52.856011) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.857 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.857 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.857 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-28T17:58:52.857159) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.857 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.857 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc1433973e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.857 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.858 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.858 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.858 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.858 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.858 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-28T17:58:52.858225) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.858 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc143397c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.858 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.858 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.859 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.859 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.859 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.859 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.859 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-28T17:58:52.859102) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.859 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.859 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc143397ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.860 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.860 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.860 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.860 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.860 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.860 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.outgoing.bytes volume: 4780 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.860 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.861 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-28T17:58:52.860370) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.861 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc1460ad370>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.861 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.861 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.861 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.861 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.861 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.allocation volume: 21962752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.861 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-28T17:58:52.861554) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.861 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.862 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.862 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.862 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.862 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.863 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.863 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc143397d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.863 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.863 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.863 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.863 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.863 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.bytes.delta volume: 380 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.863 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-28T17:58:52.863541) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.863 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.864 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.864 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc143397e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.864 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.864 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.864 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.864 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.864 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-28T17:58:52.864656) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.864 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.864 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-7knpyto-6e6fe7uhqqsg-35p6vulzyxtr-vnf-mf7ve6yw5m3s>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-7knpyto-6e6fe7uhqqsg-35p6vulzyxtr-vnf-mf7ve6yw5m3s>]
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.865 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc143397650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.865 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.865 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.865 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.865 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.865 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.865 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-28T17:58:52.865552) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.865 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.incoming.bytes volume: 4807 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.866 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.866 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc143397e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.866 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.866 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.866 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.866 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.866 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.866 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-28T17:58:52.866671) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.867 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.outgoing.packets volume: 41 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.867 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.867 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc143397f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.867 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.867 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.867 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.867 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.867 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.867 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-28T17:58:52.867742) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.868 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.868 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.868 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc143397230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.868 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.868 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.868 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.868 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.868 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.869 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.869 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-28T17:58:52.868794) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.869 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.869 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.869 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.870 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.870 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.871 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.871 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.872 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.872 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.872 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.872 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.872 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.873 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.873 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.873 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.873 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.874 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.874 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.874 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.874 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.875 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.875 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.875 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.875 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.876 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.876 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.876 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.876 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.877 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.877 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:58:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 17:58:52.877 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 17:58:53 compute-0 nova_compute[189296]: 2025-11-28 17:58:53.009 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:58:53 compute-0 nova_compute[189296]: 2025-11-28 17:58:53.013 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:58:54 compute-0 nova_compute[189296]: 2025-11-28 17:58:54.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:58:54 compute-0 nova_compute[189296]: 2025-11-28 17:58:54.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 17:58:55 compute-0 podman[240200]: 2025-11-28 17:58:55.997155982 +0000 UTC m=+0.059676653 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 17:58:56 compute-0 nova_compute[189296]: 2025-11-28 17:58:56.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:58:57 compute-0 nova_compute[189296]: 2025-11-28 17:58:57.620 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:58:57 compute-0 nova_compute[189296]: 2025-11-28 17:58:57.623 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:58:57 compute-0 nova_compute[189296]: 2025-11-28 17:58:57.624 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 17:58:58 compute-0 nova_compute[189296]: 2025-11-28 17:58:58.012 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:58:58 compute-0 nova_compute[189296]: 2025-11-28 17:58:58.015 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:58:58 compute-0 nova_compute[189296]: 2025-11-28 17:58:58.337 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-3e7aebb1-2fd3-449c-be21-02c4d1b57717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 17:58:58 compute-0 nova_compute[189296]: 2025-11-28 17:58:58.338 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-3e7aebb1-2fd3-449c-be21-02c4d1b57717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 17:58:58 compute-0 nova_compute[189296]: 2025-11-28 17:58:58.338 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 17:58:59 compute-0 podman[203494]: time="2025-11-28T17:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 17:58:59 compute-0 podman[203494]: @ - - [28/Nov/2025:17:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 17:58:59 compute-0 podman[203494]: @ - - [28/Nov/2025:17:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4767 "" "Go-http-client/1.1"
Nov 28 17:59:01 compute-0 openstack_network_exporter[205632]: ERROR   17:59:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 17:59:01 compute-0 openstack_network_exporter[205632]: ERROR   17:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:59:01 compute-0 openstack_network_exporter[205632]: ERROR   17:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:59:01 compute-0 openstack_network_exporter[205632]: ERROR   17:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 17:59:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 17:59:01 compute-0 openstack_network_exporter[205632]: ERROR   17:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 17:59:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 17:59:01 compute-0 nova_compute[189296]: 2025-11-28 17:59:01.519 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Updating instance_info_cache with network_info: [{"id": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "address": "fa:16:3e:4f:bc:ca", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0754721-6c", "ovs_interfaceid": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 17:59:01 compute-0 nova_compute[189296]: 2025-11-28 17:59:01.547 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-3e7aebb1-2fd3-449c-be21-02c4d1b57717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 17:59:01 compute-0 nova_compute[189296]: 2025-11-28 17:59:01.547 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 17:59:01 compute-0 nova_compute[189296]: 2025-11-28 17:59:01.548 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:59:01 compute-0 nova_compute[189296]: 2025-11-28 17:59:01.548 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:59:01 compute-0 nova_compute[189296]: 2025-11-28 17:59:01.548 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:59:01 compute-0 nova_compute[189296]: 2025-11-28 17:59:01.549 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:59:01 compute-0 nova_compute[189296]: 2025-11-28 17:59:01.573 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:59:01 compute-0 nova_compute[189296]: 2025-11-28 17:59:01.573 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:59:01 compute-0 nova_compute[189296]: 2025-11-28 17:59:01.574 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:59:01 compute-0 nova_compute[189296]: 2025-11-28 17:59:01.574 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 17:59:01 compute-0 nova_compute[189296]: 2025-11-28 17:59:01.670 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:59:01 compute-0 nova_compute[189296]: 2025-11-28 17:59:01.728 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:59:01 compute-0 nova_compute[189296]: 2025-11-28 17:59:01.729 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:59:01 compute-0 nova_compute[189296]: 2025-11-28 17:59:01.796 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:59:01 compute-0 nova_compute[189296]: 2025-11-28 17:59:01.798 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:59:01 compute-0 nova_compute[189296]: 2025-11-28 17:59:01.864 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:59:01 compute-0 nova_compute[189296]: 2025-11-28 17:59:01.865 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:59:01 compute-0 nova_compute[189296]: 2025-11-28 17:59:01.964 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:59:01 compute-0 nova_compute[189296]: 2025-11-28 17:59:01.974 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:59:02 compute-0 nova_compute[189296]: 2025-11-28 17:59:02.036 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:59:02 compute-0 nova_compute[189296]: 2025-11-28 17:59:02.038 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:59:02 compute-0 nova_compute[189296]: 2025-11-28 17:59:02.096 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:59:02 compute-0 nova_compute[189296]: 2025-11-28 17:59:02.097 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:59:02 compute-0 nova_compute[189296]: 2025-11-28 17:59:02.157 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:59:02 compute-0 nova_compute[189296]: 2025-11-28 17:59:02.159 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 17:59:02 compute-0 nova_compute[189296]: 2025-11-28 17:59:02.237 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 17:59:02 compute-0 nova_compute[189296]: 2025-11-28 17:59:02.620 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 17:59:02 compute-0 nova_compute[189296]: 2025-11-28 17:59:02.621 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5072MB free_disk=72.36323928833008GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 17:59:02 compute-0 nova_compute[189296]: 2025-11-28 17:59:02.622 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:59:02 compute-0 nova_compute[189296]: 2025-11-28 17:59:02.622 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:59:02 compute-0 nova_compute[189296]: 2025-11-28 17:59:02.737 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 5d10f9fc-89ea-4059-8532-7e0aec0791d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 17:59:02 compute-0 nova_compute[189296]: 2025-11-28 17:59:02.737 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 3e7aebb1-2fd3-449c-be21-02c4d1b57717 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 17:59:02 compute-0 nova_compute[189296]: 2025-11-28 17:59:02.737 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 17:59:02 compute-0 nova_compute[189296]: 2025-11-28 17:59:02.738 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 17:59:02 compute-0 nova_compute[189296]: 2025-11-28 17:59:02.822 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 17:59:02 compute-0 nova_compute[189296]: 2025-11-28 17:59:02.867 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 17:59:02 compute-0 nova_compute[189296]: 2025-11-28 17:59:02.868 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 17:59:02 compute-0 nova_compute[189296]: 2025-11-28 17:59:02.868 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:59:03 compute-0 nova_compute[189296]: 2025-11-28 17:59:03.014 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:59:03 compute-0 nova_compute[189296]: 2025-11-28 17:59:03.017 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:59:04 compute-0 nova_compute[189296]: 2025-11-28 17:59:04.945 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:59:06 compute-0 podman[240248]: 2025-11-28 17:59:06.00647376 +0000 UTC m=+0.067022282 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=f26160204c78771e78cdd2489258319b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 28 17:59:06 compute-0 podman[240249]: 2025-11-28 17:59:06.010669652 +0000 UTC m=+0.069359729 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 17:59:06 compute-0 podman[240247]: 2025-11-28 17:59:06.022973682 +0000 UTC m=+0.087387178 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vcs-type=git, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 28 17:59:08 compute-0 nova_compute[189296]: 2025-11-28 17:59:08.018 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:59:11 compute-0 podman[240307]: 2025-11-28 17:59:11.996085232 +0000 UTC m=+0.058417123 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 28 17:59:12 compute-0 podman[240308]: 2025-11-28 17:59:12.0157409 +0000 UTC m=+0.071984073 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm)
Nov 28 17:59:13 compute-0 nova_compute[189296]: 2025-11-28 17:59:13.019 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:59:14 compute-0 podman[240348]: 2025-11-28 17:59:14.727277402 +0000 UTC m=+0.054581409 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 28 17:59:14 compute-0 podman[240349]: 2025-11-28 17:59:14.740892603 +0000 UTC m=+0.064247594 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, version=9.4, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, release=1214.1726694543, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vcs-type=git, distribution-scope=public, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container)
Nov 28 17:59:15 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 28 17:59:16 compute-0 podman[240392]: 2025-11-28 17:59:16.060508645 +0000 UTC m=+0.122030320 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Nov 28 17:59:18 compute-0 nova_compute[189296]: 2025-11-28 17:59:18.020 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:59:18 compute-0 nova_compute[189296]: 2025-11-28 17:59:18.021 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:59:23 compute-0 nova_compute[189296]: 2025-11-28 17:59:23.023 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:59:26 compute-0 podman[240420]: 2025-11-28 17:59:26.99931399 +0000 UTC m=+0.059790335 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 17:59:28 compute-0 nova_compute[189296]: 2025-11-28 17:59:28.025 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:59:28 compute-0 nova_compute[189296]: 2025-11-28 17:59:28.027 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:59:29 compute-0 podman[203494]: time="2025-11-28T17:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 17:59:29 compute-0 podman[203494]: @ - - [28/Nov/2025:17:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 17:59:29 compute-0 podman[203494]: @ - - [28/Nov/2025:17:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4769 "" "Go-http-client/1.1"
Nov 28 17:59:31 compute-0 openstack_network_exporter[205632]: ERROR   17:59:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 17:59:31 compute-0 openstack_network_exporter[205632]: ERROR   17:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:59:31 compute-0 openstack_network_exporter[205632]: ERROR   17:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 17:59:31 compute-0 openstack_network_exporter[205632]: ERROR   17:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 17:59:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 17:59:31 compute-0 openstack_network_exporter[205632]: ERROR   17:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 17:59:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 17:59:33 compute-0 nova_compute[189296]: 2025-11-28 17:59:33.027 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:59:37 compute-0 podman[240446]: 2025-11-28 17:59:37.019435639 +0000 UTC m=+0.076971093 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Nov 28 17:59:37 compute-0 podman[240445]: 2025-11-28 17:59:37.035439059 +0000 UTC m=+0.097276418 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-type=git, architecture=x86_64, version=9.6, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 28 17:59:37 compute-0 podman[240447]: 2025-11-28 17:59:37.046476617 +0000 UTC m=+0.100590588 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 28 17:59:38 compute-0 nova_compute[189296]: 2025-11-28 17:59:38.031 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 17:59:43 compute-0 podman[240501]: 2025-11-28 17:59:43.030597706 +0000 UTC m=+0.086735312 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 17:59:43 compute-0 nova_compute[189296]: 2025-11-28 17:59:43.032 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:59:43 compute-0 nova_compute[189296]: 2025-11-28 17:59:43.034 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:59:43 compute-0 podman[240502]: 2025-11-28 17:59:43.047451416 +0000 UTC m=+0.094691126 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 28 17:59:45 compute-0 podman[240539]: 2025-11-28 17:59:45.001630019 +0000 UTC m=+0.059344215 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 17:59:45 compute-0 podman[240540]: 2025-11-28 17:59:45.024969117 +0000 UTC m=+0.078782928 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=edpm, managed_by=edpm_ansible, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=)
Nov 28 17:59:47 compute-0 podman[240580]: 2025-11-28 17:59:47.069937539 +0000 UTC m=+0.132808623 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 17:59:48 compute-0 nova_compute[189296]: 2025-11-28 17:59:48.035 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:59:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:59:52.604 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 17:59:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:59:52.604 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 17:59:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 17:59:52.605 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 17:59:53 compute-0 nova_compute[189296]: 2025-11-28 17:59:53.038 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:59:56 compute-0 nova_compute[189296]: 2025-11-28 17:59:56.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:59:56 compute-0 nova_compute[189296]: 2025-11-28 17:59:56.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 17:59:57 compute-0 nova_compute[189296]: 2025-11-28 17:59:57.621 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:59:57 compute-0 nova_compute[189296]: 2025-11-28 17:59:57.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:59:57 compute-0 nova_compute[189296]: 2025-11-28 17:59:57.624 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 17:59:57 compute-0 nova_compute[189296]: 2025-11-28 17:59:57.624 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 17:59:58 compute-0 nova_compute[189296]: 2025-11-28 17:59:58.039 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 17:59:58 compute-0 nova_compute[189296]: 2025-11-28 17:59:58.041 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 17:59:58 compute-0 nova_compute[189296]: 2025-11-28 17:59:58.041 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 28 17:59:58 compute-0 nova_compute[189296]: 2025-11-28 17:59:58.041 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 28 17:59:58 compute-0 nova_compute[189296]: 2025-11-28 17:59:58.042 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 28 17:59:58 compute-0 nova_compute[189296]: 2025-11-28 17:59:58.043 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 17:59:58 compute-0 podman[240609]: 2025-11-28 17:59:58.059607732 +0000 UTC m=+0.116145477 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 17:59:58 compute-0 nova_compute[189296]: 2025-11-28 17:59:58.405 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 17:59:58 compute-0 nova_compute[189296]: 2025-11-28 17:59:58.406 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 17:59:58 compute-0 nova_compute[189296]: 2025-11-28 17:59:58.406 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 17:59:58 compute-0 nova_compute[189296]: 2025-11-28 17:59:58.406 189300 DEBUG nova.objects.instance [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5d10f9fc-89ea-4059-8532-7e0aec0791d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 17:59:59 compute-0 podman[203494]: time="2025-11-28T17:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 17:59:59 compute-0 podman[203494]: @ - - [28/Nov/2025:17:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 17:59:59 compute-0 podman[203494]: @ - - [28/Nov/2025:17:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4766 "" "Go-http-client/1.1"
Nov 28 17:59:59 compute-0 nova_compute[189296]: 2025-11-28 17:59:59.799 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Updating instance_info_cache with network_info: [{"id": "0e0a227a-6212-4496-8954-fe210b763d0b", "address": "fa:16:3e:28:42:00", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e0a227a-62", "ovs_interfaceid": "0e0a227a-6212-4496-8954-fe210b763d0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 17:59:59 compute-0 nova_compute[189296]: 2025-11-28 17:59:59.848 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 17:59:59 compute-0 nova_compute[189296]: 2025-11-28 17:59:59.848 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 17:59:59 compute-0 nova_compute[189296]: 2025-11-28 17:59:59.849 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 17:59:59 compute-0 nova_compute[189296]: 2025-11-28 17:59:59.849 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:00:00 compute-0 nova_compute[189296]: 2025-11-28 18:00:00.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:00:01 compute-0 openstack_network_exporter[205632]: ERROR   18:00:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:00:01 compute-0 openstack_network_exporter[205632]: ERROR   18:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:00:01 compute-0 openstack_network_exporter[205632]: ERROR   18:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:00:01 compute-0 openstack_network_exporter[205632]: ERROR   18:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:00:01 compute-0 openstack_network_exporter[205632]: ERROR   18:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:00:01 compute-0 nova_compute[189296]: 2025-11-28 18:00:01.620 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:00:01 compute-0 nova_compute[189296]: 2025-11-28 18:00:01.670 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:00:01 compute-0 nova_compute[189296]: 2025-11-28 18:00:01.670 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:00:01 compute-0 nova_compute[189296]: 2025-11-28 18:00:01.758 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:00:01 compute-0 nova_compute[189296]: 2025-11-28 18:00:01.758 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:00:01 compute-0 nova_compute[189296]: 2025-11-28 18:00:01.759 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:00:01 compute-0 nova_compute[189296]: 2025-11-28 18:00:01.759 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:00:01 compute-0 nova_compute[189296]: 2025-11-28 18:00:01.913 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:00:01 compute-0 nova_compute[189296]: 2025-11-28 18:00:01.971 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:00:01 compute-0 nova_compute[189296]: 2025-11-28 18:00:01.972 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:00:02 compute-0 nova_compute[189296]: 2025-11-28 18:00:02.029 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:00:02 compute-0 nova_compute[189296]: 2025-11-28 18:00:02.030 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:00:02 compute-0 nova_compute[189296]: 2025-11-28 18:00:02.085 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:00:02 compute-0 nova_compute[189296]: 2025-11-28 18:00:02.086 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:00:02 compute-0 nova_compute[189296]: 2025-11-28 18:00:02.143 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:00:02 compute-0 nova_compute[189296]: 2025-11-28 18:00:02.150 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:00:02 compute-0 nova_compute[189296]: 2025-11-28 18:00:02.207 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:00:02 compute-0 nova_compute[189296]: 2025-11-28 18:00:02.208 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:00:02 compute-0 nova_compute[189296]: 2025-11-28 18:00:02.265 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:00:02 compute-0 nova_compute[189296]: 2025-11-28 18:00:02.266 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:00:02 compute-0 nova_compute[189296]: 2025-11-28 18:00:02.323 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:00:02 compute-0 nova_compute[189296]: 2025-11-28 18:00:02.324 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:00:02 compute-0 nova_compute[189296]: 2025-11-28 18:00:02.380 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:00:02 compute-0 nova_compute[189296]: 2025-11-28 18:00:02.719 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:00:02 compute-0 nova_compute[189296]: 2025-11-28 18:00:02.720 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5055MB free_disk=72.36323928833008GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:00:02 compute-0 nova_compute[189296]: 2025-11-28 18:00:02.721 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:00:02 compute-0 nova_compute[189296]: 2025-11-28 18:00:02.721 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:00:02 compute-0 nova_compute[189296]: 2025-11-28 18:00:02.805 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 5d10f9fc-89ea-4059-8532-7e0aec0791d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:00:02 compute-0 nova_compute[189296]: 2025-11-28 18:00:02.805 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 3e7aebb1-2fd3-449c-be21-02c4d1b57717 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:00:02 compute-0 nova_compute[189296]: 2025-11-28 18:00:02.806 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:00:02 compute-0 nova_compute[189296]: 2025-11-28 18:00:02.806 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:00:02 compute-0 nova_compute[189296]: 2025-11-28 18:00:02.892 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:00:02 compute-0 nova_compute[189296]: 2025-11-28 18:00:02.906 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:00:02 compute-0 nova_compute[189296]: 2025-11-28 18:00:02.908 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:00:02 compute-0 nova_compute[189296]: 2025-11-28 18:00:02.908 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:00:03 compute-0 nova_compute[189296]: 2025-11-28 18:00:03.042 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:00:03 compute-0 nova_compute[189296]: 2025-11-28 18:00:03.048 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:00:05 compute-0 nova_compute[189296]: 2025-11-28 18:00:05.864 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:00:08 compute-0 podman[240657]: 2025-11-28 18:00:08.006954191 +0000 UTC m=+0.070621360 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, release=1755695350)
Nov 28 18:00:08 compute-0 podman[240658]: 2025-11-28 18:00:08.010731062 +0000 UTC m=+0.071488860 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, 
org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:00:08 compute-0 podman[240659]: 2025-11-28 18:00:08.038826986 +0000 UTC m=+0.093840765 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS)
Nov 28 18:00:08 compute-0 nova_compute[189296]: 2025-11-28 18:00:08.045 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:00:08 compute-0 nova_compute[189296]: 2025-11-28 18:00:08.048 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:00:13 compute-0 nova_compute[189296]: 2025-11-28 18:00:13.045 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:00:13 compute-0 nova_compute[189296]: 2025-11-28 18:00:13.049 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:00:14 compute-0 podman[240715]: 2025-11-28 18:00:14.002424875 +0000 UTC m=+0.062678797 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 28 18:00:14 compute-0 podman[240716]: 2025-11-28 18:00:14.030679552 +0000 UTC m=+0.088213637 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:00:16 compute-0 podman[240752]: 2025-11-28 18:00:16.03719433 +0000 UTC m=+0.083401155 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 28 18:00:16 compute-0 podman[240753]: 2025-11-28 18:00:16.061042012 +0000 UTC m=+0.109405599 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, io.openshift.expose-services=, version=9.4, com.redhat.component=ubi9-container, managed_by=edpm_ansible, release-0.7.12=, vcs-type=git, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, distribution-scope=public, container_name=kepler)
Nov 28 18:00:18 compute-0 podman[240793]: 2025-11-28 18:00:18.027938276 +0000 UTC m=+0.094915336 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 28 18:00:18 compute-0 nova_compute[189296]: 2025-11-28 18:00:18.046 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:00:18 compute-0 nova_compute[189296]: 2025-11-28 18:00:18.051 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:00:23 compute-0 nova_compute[189296]: 2025-11-28 18:00:23.048 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:00:23 compute-0 nova_compute[189296]: 2025-11-28 18:00:23.052 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:00:28 compute-0 nova_compute[189296]: 2025-11-28 18:00:28.049 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:00:28 compute-0 nova_compute[189296]: 2025-11-28 18:00:28.054 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:00:29 compute-0 podman[240819]: 2025-11-28 18:00:29.002804419 +0000 UTC m=+0.069870765 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 28 18:00:29 compute-0 podman[203494]: time="2025-11-28T18:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:00:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 18:00:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4774 "" "Go-http-client/1.1"
Nov 28 18:00:31 compute-0 openstack_network_exporter[205632]: ERROR   18:00:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:00:31 compute-0 openstack_network_exporter[205632]: ERROR   18:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:00:31 compute-0 openstack_network_exporter[205632]: ERROR   18:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:00:31 compute-0 openstack_network_exporter[205632]: ERROR   18:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:00:31 compute-0 openstack_network_exporter[205632]: ERROR   18:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:00:33 compute-0 nova_compute[189296]: 2025-11-28 18:00:33.052 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:00:33 compute-0 nova_compute[189296]: 2025-11-28 18:00:33.056 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:00:38 compute-0 nova_compute[189296]: 2025-11-28 18:00:38.055 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:00:39 compute-0 podman[240841]: 2025-11-28 18:00:39.001526665 +0000 UTC m=+0.067731013 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, 
vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, version=9.6, release=1755695350)
Nov 28 18:00:39 compute-0 podman[240842]: 2025-11-28 18:00:39.015496025 +0000 UTC m=+0.075639396 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 28 18:00:39 compute-0 podman[240843]: 2025-11-28 18:00:39.038526788 +0000 UTC m=+0.096655949 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 28 18:00:43 compute-0 nova_compute[189296]: 2025-11-28 18:00:43.055 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:00:43 compute-0 nova_compute[189296]: 2025-11-28 18:00:43.058 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:00:44 compute-0 podman[240898]: 2025-11-28 18:00:44.732045096 +0000 UTC m=+0.064412032 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:00:44 compute-0 podman[240899]: 2025-11-28 18:00:44.746439187 +0000 UTC m=+0.074788945 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:00:46 compute-0 podman[240934]: 2025-11-28 18:00:46.99702082 +0000 UTC m=+0.063372316 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 28 18:00:47 compute-0 podman[240935]: 2025-11-28 18:00:47.006210054 +0000 UTC m=+0.068678026 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=9.4, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., container_name=kepler, release=1214.1726694543, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, architecture=x86_64, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git)
Nov 28 18:00:48 compute-0 nova_compute[189296]: 2025-11-28 18:00:48.058 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:00:48 compute-0 nova_compute[189296]: 2025-11-28 18:00:48.060 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:00:49 compute-0 podman[240976]: 2025-11-28 18:00:49.060074909 +0000 UTC m=+0.118891381 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125)
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.977 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.977 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.977 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.978 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc143395760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.978 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.979 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.979 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.979 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.979 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.979 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.979 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.979 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.983 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5d10f9fc-89ea-4059-8532-7e0aec0791d6', 'name': 'test_0', 'flavor': {'id': 'e125fa74-9e9f-47dc-8c8e-699980f99f10', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'f54c2688-82d2-4cd3-8c3b-96e774162948'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '79ee04b003ca4eb8a045699c7852a8b0', 'user_id': '6a35450c34a344b1a4e63aae1be2b971', 'hostId': 'db9a2769e8f144ae30ff05291a20072f031ca2fe14565f94b8d8a651', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.986 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3e7aebb1-2fd3-449c-be21-02c4d1b57717', 'name': 'vn-7knpyto-6e6fe7uhqqsg-35p6vulzyxtr-vnf-mf7ve6yw5m3s', 'flavor': {'id': 'e125fa74-9e9f-47dc-8c8e-699980f99f10', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'f54c2688-82d2-4cd3-8c3b-96e774162948'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '79ee04b003ca4eb8a045699c7852a8b0', 'user_id': '6a35450c34a344b1a4e63aae1be2b971', 'hostId': 'db9a2769e8f144ae30ff05291a20072f031ca2fe14565f94b8d8a651', 'status': 'active', 'metadata': {'metering.server_group': 'ac6a0a76-f006-4c50-a4a8-904a1f128161'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.986 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.986 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.986 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.987 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:00:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:51.987 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-28T18:00:51.986981) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.007 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.007 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.007 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.028 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.028 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.029 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.029 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.029 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc1433970b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.029 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.029 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.029 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.030 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.030 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-28T18:00:52.030041) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.097 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.098 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.098 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.200 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.201 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.201 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.201 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.202 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc1433971d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.202 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.202 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.202 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.202 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.202 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.latency volume: 284678818 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.203 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.latency volume: 69824352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.203 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-28T18:00:52.202589) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.203 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.latency volume: 37055244 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.203 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.latency volume: 321385299 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.204 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.latency volume: 64866438 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.204 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.latency volume: 53024748 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.204 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.204 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc143397c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.205 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.205 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.205 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.205 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.205 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-28T18:00:52.205429) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.209 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.212 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.213 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.213 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc143397620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.213 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.213 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.213 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.213 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.214 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-28T18:00:52.213897) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.235 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.258 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/memory.usage volume: 49.1875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.259 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.259 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc143397260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.259 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.259 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.259 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.259 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.259 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.259 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.260 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.260 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.260 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.260 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.261 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-28T18:00:52.259680) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.261 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.261 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc143397290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.261 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.261 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.261 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.261 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.261 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.262 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.262 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.262 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.262 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.262 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.263 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.263 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc143408290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.263 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.263 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-28T18:00:52.261715) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.263 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.263 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.263 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.263 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.263 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-28T18:00:52.263684) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.264 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.264 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.264 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc1433972f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.264 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.264 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.264 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.264 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.264 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.latency volume: 646402207 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.265 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-28T18:00:52.264735) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.265 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.latency volume: 6041958 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.265 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.265 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.latency volume: 993438844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.265 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.latency volume: 9215217 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.265 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.266 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.266 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc144640f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.266 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.266 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.266 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.266 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.266 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.266 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.267 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.267 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.requests volume: 239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.267 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.267 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.267 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-28T18:00:52.266560) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.268 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.268 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc1433976b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.268 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.268 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.268 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.268 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.269 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-28T18:00:52.268740) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.268 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.269 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.269 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.269 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc143397fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.269 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.269 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.270 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.270 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.270 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-28T18:00:52.270149) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.270 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.270 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.270 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.270 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc14457db80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.271 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.271 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.271 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.271 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.271 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/cpu volume: 34550000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.271 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-28T18:00:52.271409) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.271 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/cpu volume: 173800000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.272 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.272 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc143397950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.272 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.272 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc143397380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.272 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.272 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.272 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.272 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.273 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.273 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-28T18:00:52.272803) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.273 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc143397bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.273 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.273 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.273 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.273 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.273 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.274 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.274 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.274 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-28T18:00:52.273837) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.274 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc1433973e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.274 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.274 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.275 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.275 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.275 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-28T18:00:52.275073) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.275 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc143397c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.275 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.276 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.276 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.276 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.276 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.276 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.276 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-28T18:00:52.276228) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.277 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.277 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc143397ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.277 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.277 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.277 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.277 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.277 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.277 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.outgoing.bytes volume: 4850 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.278 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-28T18:00:52.277489) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.278 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.278 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc1460ad370>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.278 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.278 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.278 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.278 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.278 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.allocation volume: 21962752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.279 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.279 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-28T18:00:52.278744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.279 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.279 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.279 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.280 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.280 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.280 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc143397d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.280 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.280 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.280 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.280 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.281 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.281 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.281 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-28T18:00:52.280950) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.281 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.281 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc143397e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.282 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.282 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc143397650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.282 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.282 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.282 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.282 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.282 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.282 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-28T18:00:52.282610) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.283 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.incoming.bytes volume: 4807 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.283 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.283 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc143397e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.283 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.283 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.283 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.283 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.284 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.284 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-28T18:00:52.283902) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.284 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.outgoing.packets volume: 42 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.284 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.284 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc143397f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.284 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.285 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.285 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.285 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.285 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.285 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.285 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.286 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-28T18:00:52.285221) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.286 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc143397230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.286 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.286 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.286 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.286 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.286 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.286 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.287 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.287 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.287 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.287 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.287 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-28T18:00:52.286502) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.288 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:00:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:00:52.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:00:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:00:52.604 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:00:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:00:52.605 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:00:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:00:52.606 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:00:53 compute-0 nova_compute[189296]: 2025-11-28 18:00:53.059 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:00:53 compute-0 nova_compute[189296]: 2025-11-28 18:00:53.062 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:00:56 compute-0 nova_compute[189296]: 2025-11-28 18:00:56.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:00:56 compute-0 nova_compute[189296]: 2025-11-28 18:00:56.624 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:00:58 compute-0 nova_compute[189296]: 2025-11-28 18:00:58.061 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:00:58 compute-0 nova_compute[189296]: 2025-11-28 18:00:58.063 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:00:58 compute-0 nova_compute[189296]: 2025-11-28 18:00:58.620 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:00:59 compute-0 nova_compute[189296]: 2025-11-28 18:00:59.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:00:59 compute-0 nova_compute[189296]: 2025-11-28 18:00:59.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:00:59 compute-0 podman[203494]: time="2025-11-28T18:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:00:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 18:00:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4767 "" "Go-http-client/1.1"
Nov 28 18:01:00 compute-0 podman[241001]: 2025-11-28 18:01:00.003706502 +0000 UTC m=+0.068436841 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 28 18:01:00 compute-0 nova_compute[189296]: 2025-11-28 18:01:00.429 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-3e7aebb1-2fd3-449c-be21-02c4d1b57717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:01:00 compute-0 nova_compute[189296]: 2025-11-28 18:01:00.429 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-3e7aebb1-2fd3-449c-be21-02c4d1b57717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:01:00 compute-0 nova_compute[189296]: 2025-11-28 18:01:00.429 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:01:01 compute-0 openstack_network_exporter[205632]: ERROR   18:01:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:01:01 compute-0 openstack_network_exporter[205632]: ERROR   18:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:01:01 compute-0 openstack_network_exporter[205632]: ERROR   18:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:01:01 compute-0 openstack_network_exporter[205632]: ERROR   18:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:01:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:01:01 compute-0 openstack_network_exporter[205632]: ERROR   18:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:01:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:01:02 compute-0 nova_compute[189296]: 2025-11-28 18:01:02.775 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Updating instance_info_cache with network_info: [{"id": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "address": "fa:16:3e:4f:bc:ca", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0754721-6c", "ovs_interfaceid": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:01:02 compute-0 nova_compute[189296]: 2025-11-28 18:01:02.808 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-3e7aebb1-2fd3-449c-be21-02c4d1b57717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:01:02 compute-0 nova_compute[189296]: 2025-11-28 18:01:02.809 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:01:02 compute-0 nova_compute[189296]: 2025-11-28 18:01:02.809 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:01:02 compute-0 nova_compute[189296]: 2025-11-28 18:01:02.809 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:01:02 compute-0 nova_compute[189296]: 2025-11-28 18:01:02.810 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:01:02 compute-0 nova_compute[189296]: 2025-11-28 18:01:02.810 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:01:03 compute-0 nova_compute[189296]: 2025-11-28 18:01:03.065 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 18:01:03 compute-0 nova_compute[189296]: 2025-11-28 18:01:03.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:01:03 compute-0 nova_compute[189296]: 2025-11-28 18:01:03.691 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:01:03 compute-0 nova_compute[189296]: 2025-11-28 18:01:03.692 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:01:03 compute-0 nova_compute[189296]: 2025-11-28 18:01:03.693 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:01:03 compute-0 nova_compute[189296]: 2025-11-28 18:01:03.694 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:01:03 compute-0 nova_compute[189296]: 2025-11-28 18:01:03.934 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:01:03 compute-0 nova_compute[189296]: 2025-11-28 18:01:03.997 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:01:03 compute-0 nova_compute[189296]: 2025-11-28 18:01:03.998 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:01:04 compute-0 nova_compute[189296]: 2025-11-28 18:01:04.061 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:01:04 compute-0 nova_compute[189296]: 2025-11-28 18:01:04.062 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:01:04 compute-0 nova_compute[189296]: 2025-11-28 18:01:04.123 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:01:04 compute-0 nova_compute[189296]: 2025-11-28 18:01:04.124 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:01:04 compute-0 nova_compute[189296]: 2025-11-28 18:01:04.189 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:01:04 compute-0 nova_compute[189296]: 2025-11-28 18:01:04.201 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:01:04 compute-0 nova_compute[189296]: 2025-11-28 18:01:04.264 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:01:04 compute-0 nova_compute[189296]: 2025-11-28 18:01:04.266 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:01:04 compute-0 nova_compute[189296]: 2025-11-28 18:01:04.327 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:01:04 compute-0 nova_compute[189296]: 2025-11-28 18:01:04.329 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:01:04 compute-0 nova_compute[189296]: 2025-11-28 18:01:04.392 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:01:04 compute-0 nova_compute[189296]: 2025-11-28 18:01:04.393 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:01:04 compute-0 nova_compute[189296]: 2025-11-28 18:01:04.454 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:01:04 compute-0 nova_compute[189296]: 2025-11-28 18:01:04.775 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:01:04 compute-0 nova_compute[189296]: 2025-11-28 18:01:04.777 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5019MB free_disk=72.36380004882812GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:01:04 compute-0 nova_compute[189296]: 2025-11-28 18:01:04.777 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:01:04 compute-0 nova_compute[189296]: 2025-11-28 18:01:04.778 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:01:04 compute-0 nova_compute[189296]: 2025-11-28 18:01:04.860 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 5d10f9fc-89ea-4059-8532-7e0aec0791d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:01:04 compute-0 nova_compute[189296]: 2025-11-28 18:01:04.860 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 3e7aebb1-2fd3-449c-be21-02c4d1b57717 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:01:04 compute-0 nova_compute[189296]: 2025-11-28 18:01:04.861 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:01:04 compute-0 nova_compute[189296]: 2025-11-28 18:01:04.861 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:01:04 compute-0 nova_compute[189296]: 2025-11-28 18:01:04.883 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing inventories for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 28 18:01:04 compute-0 nova_compute[189296]: 2025-11-28 18:01:04.901 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Updating ProviderTree inventory for provider d10a9930-4504-4222-97f7-6727a5a2d43b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 28 18:01:04 compute-0 nova_compute[189296]: 2025-11-28 18:01:04.902 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Updating inventory in ProviderTree for provider d10a9930-4504-4222-97f7-6727a5a2d43b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 28 18:01:04 compute-0 nova_compute[189296]: 2025-11-28 18:01:04.915 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing aggregate associations for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 28 18:01:04 compute-0 nova_compute[189296]: 2025-11-28 18:01:04.950 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing trait associations for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b, traits: HW_CPU_X86_ABM,COMPUTE_NODE,HW_CPU_X86_SVM,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SATA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 28 18:01:05 compute-0 nova_compute[189296]: 2025-11-28 18:01:05.001 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:01:05 compute-0 nova_compute[189296]: 2025-11-28 18:01:05.016 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:01:05 compute-0 nova_compute[189296]: 2025-11-28 18:01:05.018 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:01:05 compute-0 nova_compute[189296]: 2025-11-28 18:01:05.018 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:01:07 compute-0 nova_compute[189296]: 2025-11-28 18:01:07.019 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:01:08 compute-0 nova_compute[189296]: 2025-11-28 18:01:08.066 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:01:10 compute-0 podman[241062]: 2025-11-28 18:01:10.014756488 +0000 UTC m=+0.073978755 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:01:10 compute-0 podman[241060]: 2025-11-28 18:01:10.036822096 +0000 UTC m=+0.103357462 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9)
Nov 28 18:01:10 compute-0 podman[241061]: 2025-11-28 18:01:10.042026204 +0000 UTC m=+0.105517666 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=f26160204c78771e78cdd2489258319b, tcib_managed=true, org.label-schema.name=CentOS Stream 
10 Base Image, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 28 18:01:13 compute-0 nova_compute[189296]: 2025-11-28 18:01:13.068 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:01:14 compute-0 podman[241118]: 2025-11-28 18:01:14.999257333 +0000 UTC m=+0.065626721 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 28 18:01:15 compute-0 podman[241119]: 2025-11-28 18:01:15.020439251 +0000 UTC m=+0.082198046 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:01:18 compute-0 podman[241155]: 2025-11-28 18:01:18.004716359 +0000 UTC m=+0.061646065 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 18:01:18 compute-0 podman[241156]: 2025-11-28 18:01:18.036917084 +0000 UTC m=+0.090122989 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, container_name=kepler, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, name=ubi9, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64)
Nov 28 18:01:18 compute-0 nova_compute[189296]: 2025-11-28 18:01:18.069 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:01:18 compute-0 nova_compute[189296]: 2025-11-28 18:01:18.071 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:01:20 compute-0 podman[241194]: 2025-11-28 18:01:20.111649748 +0000 UTC m=+0.170937080 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 28 18:01:23 compute-0 nova_compute[189296]: 2025-11-28 18:01:23.071 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:01:28 compute-0 nova_compute[189296]: 2025-11-28 18:01:28.073 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:01:29 compute-0 podman[203494]: time="2025-11-28T18:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:01:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 18:01:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4766 "" "Go-http-client/1.1"
Nov 28 18:01:30 compute-0 podman[241221]: 2025-11-28 18:01:30.98190787 +0000 UTC m=+0.052436969 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 28 18:01:31 compute-0 openstack_network_exporter[205632]: ERROR   18:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:01:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:01:31 compute-0 openstack_network_exporter[205632]: ERROR   18:01:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:01:31 compute-0 openstack_network_exporter[205632]: ERROR   18:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:01:31 compute-0 openstack_network_exporter[205632]: ERROR   18:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:01:31 compute-0 openstack_network_exporter[205632]: ERROR   18:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:01:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:01:33 compute-0 nova_compute[189296]: 2025-11-28 18:01:33.076 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 18:01:33 compute-0 nova_compute[189296]: 2025-11-28 18:01:33.077 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:01:33 compute-0 nova_compute[189296]: 2025-11-28 18:01:33.077 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 28 18:01:33 compute-0 nova_compute[189296]: 2025-11-28 18:01:33.077 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 28 18:01:33 compute-0 nova_compute[189296]: 2025-11-28 18:01:33.078 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 28 18:01:33 compute-0 nova_compute[189296]: 2025-11-28 18:01:33.079 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:01:38 compute-0 nova_compute[189296]: 2025-11-28 18:01:38.079 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:01:41 compute-0 podman[241244]: 2025-11-28 18:01:41.032736967 +0000 UTC m=+0.086394508 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, build-date=2025-08-20T13:12:41, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 28 18:01:41 compute-0 podman[241245]: 2025-11-28 18:01:41.064864521 +0000 UTC m=+0.116482622 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=f26160204c78771e78cdd2489258319b, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 28 18:01:41 compute-0 podman[241246]: 2025-11-28 18:01:41.07629116 +0000 UTC m=+0.117651301 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 28 18:01:43 compute-0 nova_compute[189296]: 2025-11-28 18:01:43.082 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 18:01:46 compute-0 podman[241301]: 2025-11-28 18:01:46.004022119 +0000 UTC m=+0.070755356 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Nov 28 18:01:46 compute-0 podman[241302]: 2025-11-28 18:01:46.00279576 +0000 UTC m=+0.067190060 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:01:48 compute-0 nova_compute[189296]: 2025-11-28 18:01:48.085 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:01:48 compute-0 podman[241337]: 2025-11-28 18:01:48.996512869 +0000 UTC m=+0.054561222 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:01:49 compute-0 podman[241338]: 2025-11-28 18:01:49.025233099 +0000 UTC m=+0.076027346 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, config_id=edpm, release=1214.1726694543, architecture=x86_64, io.buildah.version=1.29.0, release-0.7.12=, maintainer=Red Hat, Inc., name=ubi9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your 
containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, io.openshift.tags=base rhel9, version=9.4, com.redhat.component=ubi9-container)
Nov 28 18:01:51 compute-0 podman[241378]: 2025-11-28 18:01:51.045211337 +0000 UTC m=+0.107038642 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:01:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:01:52.606 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:01:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:01:52.607 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:01:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:01:52.608 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:01:53 compute-0 nova_compute[189296]: 2025-11-28 18:01:53.087 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:01:53 compute-0 nova_compute[189296]: 2025-11-28 18:01:53.089 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:01:58 compute-0 nova_compute[189296]: 2025-11-28 18:01:58.088 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:01:58 compute-0 nova_compute[189296]: 2025-11-28 18:01:58.089 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:01:58 compute-0 nova_compute[189296]: 2025-11-28 18:01:58.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:01:58 compute-0 nova_compute[189296]: 2025-11-28 18:01:58.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:01:59 compute-0 nova_compute[189296]: 2025-11-28 18:01:59.620 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:01:59 compute-0 nova_compute[189296]: 2025-11-28 18:01:59.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:01:59 compute-0 nova_compute[189296]: 2025-11-28 18:01:59.624 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:01:59 compute-0 nova_compute[189296]: 2025-11-28 18:01:59.624 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 18:01:59 compute-0 podman[203494]: time="2025-11-28T18:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:01:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 18:01:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4770 "" "Go-http-client/1.1"
Nov 28 18:02:00 compute-0 nova_compute[189296]: 2025-11-28 18:02:00.461 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:02:00 compute-0 nova_compute[189296]: 2025-11-28 18:02:00.463 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:02:00 compute-0 nova_compute[189296]: 2025-11-28 18:02:00.464 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:02:00 compute-0 nova_compute[189296]: 2025-11-28 18:02:00.465 189300 DEBUG nova.objects.instance [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5d10f9fc-89ea-4059-8532-7e0aec0791d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:02:01 compute-0 openstack_network_exporter[205632]: ERROR   18:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:02:01 compute-0 openstack_network_exporter[205632]: ERROR   18:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:02:01 compute-0 openstack_network_exporter[205632]: ERROR   18:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:02:01 compute-0 openstack_network_exporter[205632]: ERROR   18:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:02:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:02:01 compute-0 openstack_network_exporter[205632]: ERROR   18:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:02:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:02:02 compute-0 podman[241403]: 2025-11-28 18:02:02.007385632 +0000 UTC m=+0.070749786 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:02:03 compute-0 nova_compute[189296]: 2025-11-28 18:02:03.090 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 18:02:03 compute-0 nova_compute[189296]: 2025-11-28 18:02:03.092 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:03 compute-0 nova_compute[189296]: 2025-11-28 18:02:03.092 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 28 18:02:03 compute-0 nova_compute[189296]: 2025-11-28 18:02:03.093 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 28 18:02:03 compute-0 nova_compute[189296]: 2025-11-28 18:02:03.093 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 28 18:02:03 compute-0 nova_compute[189296]: 2025-11-28 18:02:03.095 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:03 compute-0 nova_compute[189296]: 2025-11-28 18:02:03.124 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Updating instance_info_cache with network_info: [{"id": "0e0a227a-6212-4496-8954-fe210b763d0b", "address": "fa:16:3e:28:42:00", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e0a227a-62", "ovs_interfaceid": "0e0a227a-6212-4496-8954-fe210b763d0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:02:03 compute-0 nova_compute[189296]: 2025-11-28 18:02:03.141 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:02:03 compute-0 nova_compute[189296]: 2025-11-28 18:02:03.142 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:02:03 compute-0 nova_compute[189296]: 2025-11-28 18:02:03.143 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:02:03 compute-0 nova_compute[189296]: 2025-11-28 18:02:03.144 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:02:03 compute-0 nova_compute[189296]: 2025-11-28 18:02:03.145 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:02:04 compute-0 nova_compute[189296]: 2025-11-28 18:02:04.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:02:05 compute-0 nova_compute[189296]: 2025-11-28 18:02:05.621 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:02:05 compute-0 nova_compute[189296]: 2025-11-28 18:02:05.646 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:02:05 compute-0 nova_compute[189296]: 2025-11-28 18:02:05.674 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:05 compute-0 nova_compute[189296]: 2025-11-28 18:02:05.675 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:05 compute-0 nova_compute[189296]: 2025-11-28 18:02:05.676 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:05 compute-0 nova_compute[189296]: 2025-11-28 18:02:05.676 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:02:05 compute-0 nova_compute[189296]: 2025-11-28 18:02:05.755 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:05 compute-0 nova_compute[189296]: 2025-11-28 18:02:05.837 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:05 compute-0 nova_compute[189296]: 2025-11-28 18:02:05.838 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:05 compute-0 nova_compute[189296]: 2025-11-28 18:02:05.918 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:05 compute-0 nova_compute[189296]: 2025-11-28 18:02:05.920 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:05 compute-0 nova_compute[189296]: 2025-11-28 18:02:05.986 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:05 compute-0 nova_compute[189296]: 2025-11-28 18:02:05.989 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:06 compute-0 nova_compute[189296]: 2025-11-28 18:02:06.058 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:06 compute-0 nova_compute[189296]: 2025-11-28 18:02:06.071 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:06 compute-0 nova_compute[189296]: 2025-11-28 18:02:06.138 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:06 compute-0 nova_compute[189296]: 2025-11-28 18:02:06.140 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:06 compute-0 nova_compute[189296]: 2025-11-28 18:02:06.207 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:06 compute-0 nova_compute[189296]: 2025-11-28 18:02:06.211 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:06 compute-0 nova_compute[189296]: 2025-11-28 18:02:06.284 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:06 compute-0 nova_compute[189296]: 2025-11-28 18:02:06.286 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:06 compute-0 nova_compute[189296]: 2025-11-28 18:02:06.345 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:06 compute-0 nova_compute[189296]: 2025-11-28 18:02:06.674 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:02:06 compute-0 nova_compute[189296]: 2025-11-28 18:02:06.676 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4968MB free_disk=72.36346054077148GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:02:06 compute-0 nova_compute[189296]: 2025-11-28 18:02:06.676 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:06 compute-0 nova_compute[189296]: 2025-11-28 18:02:06.677 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:06 compute-0 nova_compute[189296]: 2025-11-28 18:02:06.782 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 5d10f9fc-89ea-4059-8532-7e0aec0791d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:02:06 compute-0 nova_compute[189296]: 2025-11-28 18:02:06.783 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 3e7aebb1-2fd3-449c-be21-02c4d1b57717 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:02:06 compute-0 nova_compute[189296]: 2025-11-28 18:02:06.784 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:02:06 compute-0 nova_compute[189296]: 2025-11-28 18:02:06.784 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:02:06 compute-0 nova_compute[189296]: 2025-11-28 18:02:06.798 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:06.799 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:8b:d3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:a2:f8:d3:3f:9a'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:02:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:06.800 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 28 18:02:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:06.800 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d60b742f-7e94-4137-b50a-cfc8eac54167, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:02:06 compute-0 nova_compute[189296]: 2025-11-28 18:02:06.857 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:02:06 compute-0 nova_compute[189296]: 2025-11-28 18:02:06.870 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:02:06 compute-0 nova_compute[189296]: 2025-11-28 18:02:06.872 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:02:06 compute-0 nova_compute[189296]: 2025-11-28 18:02:06.873 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.196s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:08 compute-0 nova_compute[189296]: 2025-11-28 18:02:08.092 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:08 compute-0 nova_compute[189296]: 2025-11-28 18:02:08.095 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:08 compute-0 nova_compute[189296]: 2025-11-28 18:02:08.851 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:02:11 compute-0 podman[241451]: 2025-11-28 18:02:11.996646017 +0000 UTC m=+0.061916312 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, config_id=edpm, name=ubi9-minimal, architecture=x86_64, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, version=9.6, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 28 18:02:12 compute-0 podman[241453]: 2025-11-28 18:02:12.031853995 +0000 UTC m=+0.089493784 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:02:12 compute-0 podman[241452]: 2025-11-28 18:02:12.03533076 +0000 UTC m=+0.096961986 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=f26160204c78771e78cdd2489258319b, tcib_managed=true)
Nov 28 18:02:12 compute-0 nova_compute[189296]: 2025-11-28 18:02:12.956 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "6b9c0462-2408-4f6c-ae23-4cff0d9ef19d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:12 compute-0 nova_compute[189296]: 2025-11-28 18:02:12.958 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "6b9c0462-2408-4f6c-ae23-4cff0d9ef19d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:12 compute-0 nova_compute[189296]: 2025-11-28 18:02:12.984 189300 DEBUG nova.compute.manager [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.056 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.057 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.065 189300 DEBUG nova.virt.hardware [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.066 189300 INFO nova.compute.claims [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.094 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.097 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.227 189300 DEBUG nova.compute.provider_tree [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.249 189300 DEBUG nova.scheduler.client.report [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.273 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.274 189300 DEBUG nova.compute.manager [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.326 189300 DEBUG nova.compute.manager [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.327 189300 DEBUG nova.network.neutron [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.344 189300 INFO nova.virt.libvirt.driver [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.371 189300 DEBUG nova.compute.manager [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.440 189300 DEBUG nova.compute.manager [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.441 189300 DEBUG nova.virt.libvirt.driver [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.442 189300 INFO nova.virt.libvirt.driver [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Creating image(s)#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.442 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "/var/lib/nova/instances/6b9c0462-2408-4f6c-ae23-4cff0d9ef19d/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.443 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "/var/lib/nova/instances/6b9c0462-2408-4f6c-ae23-4cff0d9ef19d/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.444 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "/var/lib/nova/instances/6b9c0462-2408-4f6c-ae23-4cff0d9ef19d/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.455 189300 DEBUG oslo_concurrency.processutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.516 189300 DEBUG oslo_concurrency.processutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.517 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "f8e1ccb00af4752d8a5c7b44d7152dd9458fb598" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.518 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "f8e1ccb00af4752d8a5c7b44d7152dd9458fb598" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.528 189300 DEBUG oslo_concurrency.processutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.621 189300 DEBUG oslo_concurrency.processutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.622 189300 DEBUG oslo_concurrency.processutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598,backing_fmt=raw /var/lib/nova/instances/6b9c0462-2408-4f6c-ae23-4cff0d9ef19d/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.663 189300 DEBUG oslo_concurrency.processutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598,backing_fmt=raw /var/lib/nova/instances/6b9c0462-2408-4f6c-ae23-4cff0d9ef19d/disk 1073741824" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.664 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "f8e1ccb00af4752d8a5c7b44d7152dd9458fb598" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.665 189300 DEBUG oslo_concurrency.processutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.730 189300 DEBUG oslo_concurrency.processutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.732 189300 DEBUG nova.virt.disk.api [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Checking if we can resize image /var/lib/nova/instances/6b9c0462-2408-4f6c-ae23-4cff0d9ef19d/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.732 189300 DEBUG oslo_concurrency.processutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6b9c0462-2408-4f6c-ae23-4cff0d9ef19d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.789 189300 DEBUG oslo_concurrency.processutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6b9c0462-2408-4f6c-ae23-4cff0d9ef19d/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.791 189300 DEBUG nova.virt.disk.api [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Cannot resize image /var/lib/nova/instances/6b9c0462-2408-4f6c-ae23-4cff0d9ef19d/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.792 189300 DEBUG nova.objects.instance [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lazy-loading 'migration_context' on Instance uuid 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.813 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "/var/lib/nova/instances/6b9c0462-2408-4f6c-ae23-4cff0d9ef19d/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.814 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "/var/lib/nova/instances/6b9c0462-2408-4f6c-ae23-4cff0d9ef19d/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.815 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "/var/lib/nova/instances/6b9c0462-2408-4f6c-ae23-4cff0d9ef19d/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.826 189300 DEBUG oslo_concurrency.processutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.879 189300 DEBUG oslo_concurrency.processutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.880 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.881 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.892 189300 DEBUG oslo_concurrency.processutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.945 189300 DEBUG oslo_concurrency.processutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.946 189300 DEBUG oslo_concurrency.processutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/6b9c0462-2408-4f6c-ae23-4cff0d9ef19d/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.984 189300 DEBUG oslo_concurrency.processutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/6b9c0462-2408-4f6c-ae23-4cff0d9ef19d/disk.eph0 1073741824" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.985 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:13 compute-0 nova_compute[189296]: 2025-11-28 18:02:13.986 189300 DEBUG oslo_concurrency.processutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:14 compute-0 nova_compute[189296]: 2025-11-28 18:02:14.041 189300 DEBUG oslo_concurrency.processutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:14 compute-0 nova_compute[189296]: 2025-11-28 18:02:14.044 189300 DEBUG nova.virt.libvirt.driver [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 28 18:02:14 compute-0 nova_compute[189296]: 2025-11-28 18:02:14.045 189300 DEBUG nova.virt.libvirt.driver [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Ensure instance console log exists: /var/lib/nova/instances/6b9c0462-2408-4f6c-ae23-4cff0d9ef19d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 28 18:02:14 compute-0 nova_compute[189296]: 2025-11-28 18:02:14.046 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:14 compute-0 nova_compute[189296]: 2025-11-28 18:02:14.048 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:14 compute-0 nova_compute[189296]: 2025-11-28 18:02:14.049 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:16 compute-0 nova_compute[189296]: 2025-11-28 18:02:16.460 189300 DEBUG nova.network.neutron [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Successfully updated port: 8a4718af-d672-4453-91df-ba01f3157931 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 28 18:02:16 compute-0 nova_compute[189296]: 2025-11-28 18:02:16.478 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "refresh_cache-6b9c0462-2408-4f6c-ae23-4cff0d9ef19d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:02:16 compute-0 nova_compute[189296]: 2025-11-28 18:02:16.479 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquired lock "refresh_cache-6b9c0462-2408-4f6c-ae23-4cff0d9ef19d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:02:16 compute-0 nova_compute[189296]: 2025-11-28 18:02:16.480 189300 DEBUG nova.network.neutron [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 28 18:02:16 compute-0 nova_compute[189296]: 2025-11-28 18:02:16.587 189300 DEBUG nova.compute.manager [req-1a46ac66-24a1-4f5d-8103-907622a0de36 req-0cba1ac8-c5f6-45ab-a29c-c6a6870a5514 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Received event network-changed-8a4718af-d672-4453-91df-ba01f3157931 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:02:16 compute-0 nova_compute[189296]: 2025-11-28 18:02:16.588 189300 DEBUG nova.compute.manager [req-1a46ac66-24a1-4f5d-8103-907622a0de36 req-0cba1ac8-c5f6-45ab-a29c-c6a6870a5514 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Refreshing instance network info cache due to event network-changed-8a4718af-d672-4453-91df-ba01f3157931. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 28 18:02:16 compute-0 nova_compute[189296]: 2025-11-28 18:02:16.588 189300 DEBUG oslo_concurrency.lockutils [req-1a46ac66-24a1-4f5d-8103-907622a0de36 req-0cba1ac8-c5f6-45ab-a29c-c6a6870a5514 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-6b9c0462-2408-4f6c-ae23-4cff0d9ef19d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:02:16 compute-0 nova_compute[189296]: 2025-11-28 18:02:16.728 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:16 compute-0 nova_compute[189296]: 2025-11-28 18:02:16.729 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:16 compute-0 nova_compute[189296]: 2025-11-28 18:02:16.743 189300 DEBUG nova.network.neutron [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 28 18:02:16 compute-0 nova_compute[189296]: 2025-11-28 18:02:16.755 189300 DEBUG nova.compute.manager [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 28 18:02:16 compute-0 nova_compute[189296]: 2025-11-28 18:02:16.854 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:16 compute-0 nova_compute[189296]: 2025-11-28 18:02:16.856 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:16 compute-0 nova_compute[189296]: 2025-11-28 18:02:16.867 189300 DEBUG nova.virt.hardware [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 28 18:02:16 compute-0 nova_compute[189296]: 2025-11-28 18:02:16.868 189300 INFO nova.compute.claims [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 28 18:02:17 compute-0 podman[241532]: 2025-11-28 18:02:17.00770839 +0000 UTC m=+0.070338727 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, 
container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm)
Nov 28 18:02:17 compute-0 podman[241531]: 2025-11-28 18:02:17.017519469 +0000 UTC m=+0.083477737 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.049 189300 DEBUG nova.compute.provider_tree [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.070 189300 DEBUG nova.scheduler.client.report [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.090 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.233s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.090 189300 DEBUG nova.compute.manager [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.132 189300 DEBUG nova.compute.manager [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.133 189300 DEBUG nova.network.neutron [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.153 189300 INFO nova.virt.libvirt.driver [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.184 189300 DEBUG nova.compute.manager [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.261 189300 DEBUG nova.compute.manager [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.262 189300 DEBUG nova.virt.libvirt.driver [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.262 189300 INFO nova.virt.libvirt.driver [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Creating image(s)#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.263 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "/var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.263 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "/var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.264 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "/var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.277 189300 DEBUG oslo_concurrency.processutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.359 189300 DEBUG oslo_concurrency.processutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.360 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "f8e1ccb00af4752d8a5c7b44d7152dd9458fb598" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.362 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "f8e1ccb00af4752d8a5c7b44d7152dd9458fb598" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.383 189300 DEBUG oslo_concurrency.processutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.442 189300 DEBUG oslo_concurrency.processutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.443 189300 DEBUG oslo_concurrency.processutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598,backing_fmt=raw /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.484 189300 DEBUG oslo_concurrency.processutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598,backing_fmt=raw /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk 1073741824" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.486 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "f8e1ccb00af4752d8a5c7b44d7152dd9458fb598" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.487 189300 DEBUG oslo_concurrency.processutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.546 189300 DEBUG oslo_concurrency.processutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.548 189300 DEBUG nova.virt.disk.api [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Checking if we can resize image /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.548 189300 DEBUG oslo_concurrency.processutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.588 189300 DEBUG nova.network.neutron [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Updating instance_info_cache with network_info: [{"id": "8a4718af-d672-4453-91df-ba01f3157931", "address": "fa:16:3e:e8:a3:93", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.228", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a4718af-d6", "ovs_interfaceid": "8a4718af-d672-4453-91df-ba01f3157931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.609 189300 DEBUG oslo_concurrency.processutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.610 189300 DEBUG nova.virt.disk.api [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Cannot resize image /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.611 189300 DEBUG nova.objects.instance [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lazy-loading 'migration_context' on Instance uuid fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.627 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Releasing lock "refresh_cache-6b9c0462-2408-4f6c-ae23-4cff0d9ef19d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.628 189300 DEBUG nova.compute.manager [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Instance network_info: |[{"id": "8a4718af-d672-4453-91df-ba01f3157931", "address": "fa:16:3e:e8:a3:93", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.228", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a4718af-d6", "ovs_interfaceid": "8a4718af-d672-4453-91df-ba01f3157931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.629 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "/var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.630 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "/var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.631 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "/var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.643 189300 DEBUG oslo_concurrency.lockutils [req-1a46ac66-24a1-4f5d-8103-907622a0de36 req-0cba1ac8-c5f6-45ab-a29c-c6a6870a5514 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-6b9c0462-2408-4f6c-ae23-4cff0d9ef19d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.643 189300 DEBUG nova.network.neutron [req-1a46ac66-24a1-4f5d-8103-907622a0de36 req-0cba1ac8-c5f6-45ab-a29c-c6a6870a5514 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Refreshing network info cache for port 8a4718af-d672-4453-91df-ba01f3157931 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.647 189300 DEBUG nova.virt.libvirt.driver [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Start _get_guest_xml network_info=[{"id": "8a4718af-d672-4453-91df-ba01f3157931", "address": "fa:16:3e:e8:a3:93", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.228", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a4718af-d6", "ovs_interfaceid": "8a4718af-d672-4453-91df-ba01f3157931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-28T17:54:35Z,direct_url=<?>,disk_format='qcow2',id=f54c2688-82d2-4cd3-8c3b-96e774162948,min_disk=0,min_ram=0,name='cirros',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-28T17:54:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'image_id': 'f54c2688-82d2-4cd3-8c3b-96e774162948'}], 'ephemerals': [{'device_type': 'disk', 'guest_format': None, 'size': 1, 'encryption_options': None, 'device_name': '/dev/vdb', 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.648 189300 DEBUG oslo_concurrency.processutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.673 189300 WARNING nova.virt.libvirt.driver [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.680 189300 DEBUG nova.virt.libvirt.host [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.681 189300 DEBUG nova.virt.libvirt.host [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.686 189300 DEBUG nova.virt.libvirt.host [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.687 189300 DEBUG nova.virt.libvirt.host [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.687 189300 DEBUG nova.virt.libvirt.driver [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.688 189300 DEBUG nova.virt.hardware [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-28T17:54:40Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='e125fa74-9e9f-47dc-8c8e-699980f99f10',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-28T17:54:35Z,direct_url=<?>,disk_format='qcow2',id=f54c2688-82d2-4cd3-8c3b-96e774162948,min_disk=0,min_ram=0,name='cirros',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-28T17:54:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.688 189300 DEBUG nova.virt.hardware [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.688 189300 DEBUG nova.virt.hardware [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.689 189300 DEBUG nova.virt.hardware [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.689 189300 DEBUG nova.virt.hardware [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.689 189300 DEBUG nova.virt.hardware [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.689 189300 DEBUG nova.virt.hardware [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.690 189300 DEBUG nova.virt.hardware [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.690 189300 DEBUG nova.virt.hardware [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.690 189300 DEBUG nova.virt.hardware [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.691 189300 DEBUG nova.virt.hardware [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.694 189300 DEBUG nova.virt.libvirt.vif [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T18:02:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-7knpyto-qenf7da4luz4-6vcrszb4rezp-vnf-363khc3uljnu',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7knpyto-qenf7da4luz4-6vcrszb4rezp-vnf-363khc3uljnu',id=3,image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='ac6a0a76-f006-4c50-a4a8-904a1f128161'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='79ee04b003ca4eb8a045699c7852a8b0',ramdisk_id='',reservation_id='r-dw8rnfar',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:02:13Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT05MDQwMjE0ODkyNjYxNjgzNjY2PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTkwNDAyMTQ4OTI2NjE2ODM2NjY9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09OTA0MDIxNDg5MjY2MTY4MzY2Nj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTkwNDAyMTQ4OTI2NjE2ODM2NjY9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT05MDQwMjE0ODkyNjYxNjgzNjY2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT05MDQwMjE0ODkyNjYxNjgzNjY2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Nov 28 18:02:17 compute-0 nova_compute[189296]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09OTA0MDIxNDg5MjY2MTY4MzY2Nj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTkwNDAyMTQ4OTI2NjE2ODM2NjY9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT05MDQwMjE0ODkyNjYxNjgzNjY2PT0tLQo=',user_id='6a35450c34a344b1a4e63aae1be2b971',uuid=6b9c0462-2408-4f6c-ae23-4cff0d9ef19d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8a4718af-d672-4453-91df-ba01f3157931", "address": "fa:16:3e:e8:a3:93", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.228", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a4718af-d6", "ovs_interfaceid": "8a4718af-d672-4453-91df-ba01f3157931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.694 189300 DEBUG nova.network.os_vif_util [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converting VIF {"id": "8a4718af-d672-4453-91df-ba01f3157931", "address": "fa:16:3e:e8:a3:93", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.228", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a4718af-d6", "ovs_interfaceid": "8a4718af-d672-4453-91df-ba01f3157931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.695 189300 DEBUG nova.network.os_vif_util [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a3:93,bridge_name='br-int',has_traffic_filtering=True,id=8a4718af-d672-4453-91df-ba01f3157931,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8a4718af-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.696 189300 DEBUG nova.objects.instance [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.705 189300 DEBUG oslo_concurrency.processutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.706 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.706 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.717 189300 DEBUG oslo_concurrency.processutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.732 189300 DEBUG nova.virt.libvirt.driver [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] End _get_guest_xml xml=<domain type="kvm">
Nov 28 18:02:17 compute-0 nova_compute[189296]:  <uuid>6b9c0462-2408-4f6c-ae23-4cff0d9ef19d</uuid>
Nov 28 18:02:17 compute-0 nova_compute[189296]:  <name>instance-00000003</name>
Nov 28 18:02:17 compute-0 nova_compute[189296]:  <memory>524288</memory>
Nov 28 18:02:17 compute-0 nova_compute[189296]:  <vcpu>1</vcpu>
Nov 28 18:02:17 compute-0 nova_compute[189296]:  <metadata>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <nova:name>vn-7knpyto-qenf7da4luz4-6vcrszb4rezp-vnf-363khc3uljnu</nova:name>
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <nova:creationTime>2025-11-28 18:02:17</nova:creationTime>
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <nova:flavor name="m1.small">
Nov 28 18:02:17 compute-0 nova_compute[189296]:        <nova:memory>512</nova:memory>
Nov 28 18:02:17 compute-0 nova_compute[189296]:        <nova:disk>1</nova:disk>
Nov 28 18:02:17 compute-0 nova_compute[189296]:        <nova:swap>0</nova:swap>
Nov 28 18:02:17 compute-0 nova_compute[189296]:        <nova:ephemeral>1</nova:ephemeral>
Nov 28 18:02:17 compute-0 nova_compute[189296]:        <nova:vcpus>1</nova:vcpus>
Nov 28 18:02:17 compute-0 nova_compute[189296]:      </nova:flavor>
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <nova:owner>
Nov 28 18:02:17 compute-0 nova_compute[189296]:        <nova:user uuid="6a35450c34a344b1a4e63aae1be2b971">admin</nova:user>
Nov 28 18:02:17 compute-0 nova_compute[189296]:        <nova:project uuid="79ee04b003ca4eb8a045699c7852a8b0">admin</nova:project>
Nov 28 18:02:17 compute-0 nova_compute[189296]:      </nova:owner>
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <nova:root type="image" uuid="f54c2688-82d2-4cd3-8c3b-96e774162948"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <nova:ports>
Nov 28 18:02:17 compute-0 nova_compute[189296]:        <nova:port uuid="8a4718af-d672-4453-91df-ba01f3157931">
Nov 28 18:02:17 compute-0 nova_compute[189296]:          <nova:ip type="fixed" address="192.168.0.228" ipVersion="4"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:        </nova:port>
Nov 28 18:02:17 compute-0 nova_compute[189296]:      </nova:ports>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    </nova:instance>
Nov 28 18:02:17 compute-0 nova_compute[189296]:  </metadata>
Nov 28 18:02:17 compute-0 nova_compute[189296]:  <sysinfo type="smbios">
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <system>
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <entry name="manufacturer">RDO</entry>
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <entry name="product">OpenStack Compute</entry>
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <entry name="serial">6b9c0462-2408-4f6c-ae23-4cff0d9ef19d</entry>
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <entry name="uuid">6b9c0462-2408-4f6c-ae23-4cff0d9ef19d</entry>
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <entry name="family">Virtual Machine</entry>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    </system>
Nov 28 18:02:17 compute-0 nova_compute[189296]:  </sysinfo>
Nov 28 18:02:17 compute-0 nova_compute[189296]:  <os>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <boot dev="hd"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <smbios mode="sysinfo"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:  </os>
Nov 28 18:02:17 compute-0 nova_compute[189296]:  <features>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <acpi/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <apic/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <vmcoreinfo/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:  </features>
Nov 28 18:02:17 compute-0 nova_compute[189296]:  <clock offset="utc">
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <timer name="pit" tickpolicy="delay"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <timer name="hpet" present="no"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:  </clock>
Nov 28 18:02:17 compute-0 nova_compute[189296]:  <cpu mode="host-model" match="exact">
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <topology sockets="1" cores="1" threads="1"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:  </cpu>
Nov 28 18:02:17 compute-0 nova_compute[189296]:  <devices>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <disk type="file" device="disk">
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/6b9c0462-2408-4f6c-ae23-4cff0d9ef19d/disk"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <target dev="vda" bus="virtio"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <disk type="file" device="disk">
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/6b9c0462-2408-4f6c-ae23-4cff0d9ef19d/disk.eph0"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <target dev="vdb" bus="virtio"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <disk type="file" device="cdrom">
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <driver name="qemu" type="raw" cache="none"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/6b9c0462-2408-4f6c-ae23-4cff0d9ef19d/disk.config"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <target dev="sda" bus="sata"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <interface type="ethernet">
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <mac address="fa:16:3e:e8:a3:93"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <driver name="vhost" rx_queue_size="512"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <mtu size="1442"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <target dev="tap8a4718af-d6"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    </interface>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <serial type="pty">
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <log file="/var/lib/nova/instances/6b9c0462-2408-4f6c-ae23-4cff0d9ef19d/console.log" append="off"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    </serial>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <video>
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    </video>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <input type="tablet" bus="usb"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <rng model="virtio">
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <backend model="random">/dev/urandom</backend>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    </rng>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <controller type="usb" index="0"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    <memballoon model="virtio">
Nov 28 18:02:17 compute-0 nova_compute[189296]:      <stats period="10"/>
Nov 28 18:02:17 compute-0 nova_compute[189296]:    </memballoon>
Nov 28 18:02:17 compute-0 nova_compute[189296]:  </devices>
Nov 28 18:02:17 compute-0 nova_compute[189296]: </domain>
Nov 28 18:02:17 compute-0 nova_compute[189296]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.734 189300 DEBUG nova.compute.manager [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Preparing to wait for external event network-vif-plugged-8a4718af-d672-4453-91df-ba01f3157931 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.734 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "6b9c0462-2408-4f6c-ae23-4cff0d9ef19d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.735 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "6b9c0462-2408-4f6c-ae23-4cff0d9ef19d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.735 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "6b9c0462-2408-4f6c-ae23-4cff0d9ef19d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.736 189300 DEBUG nova.virt.libvirt.vif [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T18:02:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-7knpyto-qenf7da4luz4-6vcrszb4rezp-vnf-363khc3uljnu',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7knpyto-qenf7da4luz4-6vcrszb4rezp-vnf-363khc3uljnu',id=3,image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='ac6a0a76-f006-4c50-a4a8-904a1f128161'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='79ee04b003ca4eb8a045699c7852a8b0',ramdisk_id='',reservation_id='r-dw8rnfar',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:02:13Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT05MDQwMjE0ODkyNjYxNjgzNjY2PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTkwNDAyMTQ4OTI2NjE2ODM2NjY9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09OTA0MDIxNDg5MjY2MTY4MzY2Nj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTkwNDAyMTQ4OTI2NjE2ODM2NjY9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT05MDQwMjE0ODkyNjYxNjgzNjY2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT05MDQwMjE0ODkyNjYxNjgzNjY2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Nov 28 18:02:17 compute-0 nova_compute[189296]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09OTA0MDIxNDg5MjY2MTY4MzY2Nj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTkwNDAyMTQ4OTI2NjE2ODM2NjY9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT05MDQwMjE0ODkyNjYxNjgzNjY2PT0tLQo=',user_id='6a35450c34a344b1a4e63aae1be2b971',uuid=6b9c0462-2408-4f6c-ae23-4cff0d9ef19d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8a4718af-d672-4453-91df-ba01f3157931", "address": "fa:16:3e:e8:a3:93", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.228", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a4718af-d6", "ovs_interfaceid": "8a4718af-d672-4453-91df-ba01f3157931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.736 189300 DEBUG nova.network.os_vif_util [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converting VIF {"id": "8a4718af-d672-4453-91df-ba01f3157931", "address": "fa:16:3e:e8:a3:93", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.228", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a4718af-d6", "ovs_interfaceid": "8a4718af-d672-4453-91df-ba01f3157931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.737 189300 DEBUG nova.network.os_vif_util [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a3:93,bridge_name='br-int',has_traffic_filtering=True,id=8a4718af-d672-4453-91df-ba01f3157931,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8a4718af-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.738 189300 DEBUG os_vif [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a3:93,bridge_name='br-int',has_traffic_filtering=True,id=8a4718af-d672-4453-91df-ba01f3157931,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8a4718af-d6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.738 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.739 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.739 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.742 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.742 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a4718af-d6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.743 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8a4718af-d6, col_values=(('external_ids', {'iface-id': '8a4718af-d672-4453-91df-ba01f3157931', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:a3:93', 'vm-uuid': '6b9c0462-2408-4f6c-ae23-4cff0d9ef19d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.744 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:17 compute-0 NetworkManager[56307]: <info>  [1764352937.7457] manager: (tap8a4718af-d6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.747 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.754 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.755 189300 INFO os_vif [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a3:93,bridge_name='br-int',has_traffic_filtering=True,id=8a4718af-d672-4453-91df-ba01f3157931,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8a4718af-d6')#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.775 189300 DEBUG oslo_concurrency.processutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.776 189300 DEBUG oslo_concurrency.processutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.809 189300 DEBUG nova.virt.libvirt.driver [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.810 189300 DEBUG nova.virt.libvirt.driver [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.810 189300 DEBUG nova.virt.libvirt.driver [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.810 189300 DEBUG nova.virt.libvirt.driver [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] No VIF found with MAC fa:16:3e:e8:a3:93, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.811 189300 INFO nova.virt.libvirt.driver [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Using config drive#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.813 189300 DEBUG oslo_concurrency.processutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.eph0 1073741824" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.814 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.814 189300 DEBUG oslo_concurrency.processutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:17 compute-0 rsyslogd[236416]: message too long (8192) with configured size 8096, begin of message is: 2025-11-28 18:02:17.694 189300 DEBUG nova.virt.libvirt.vif [None req-0efc5d12-9c [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 28 18:02:17 compute-0 rsyslogd[236416]: message too long (8192) with configured size 8096, begin of message is: 2025-11-28 18:02:17.736 189300 DEBUG nova.virt.libvirt.vif [None req-0efc5d12-9c [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.876 189300 DEBUG oslo_concurrency.processutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.877 189300 DEBUG nova.virt.libvirt.driver [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.877 189300 DEBUG nova.virt.libvirt.driver [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Ensure instance console log exists: /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.878 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.879 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:17 compute-0 nova_compute[189296]: 2025-11-28 18:02:17.879 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:18 compute-0 nova_compute[189296]: 2025-11-28 18:02:18.097 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:18 compute-0 nova_compute[189296]: 2025-11-28 18:02:18.973 189300 INFO nova.virt.libvirt.driver [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Creating config drive at /var/lib/nova/instances/6b9c0462-2408-4f6c-ae23-4cff0d9ef19d/disk.config#033[00m
Nov 28 18:02:18 compute-0 nova_compute[189296]: 2025-11-28 18:02:18.985 189300 DEBUG oslo_concurrency.processutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6b9c0462-2408-4f6c-ae23-4cff0d9ef19d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxju69dz1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:19 compute-0 nova_compute[189296]: 2025-11-28 18:02:19.111 189300 DEBUG oslo_concurrency.processutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6b9c0462-2408-4f6c-ae23-4cff0d9ef19d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxju69dz1" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:19 compute-0 kernel: tap8a4718af-d6: entered promiscuous mode
Nov 28 18:02:19 compute-0 NetworkManager[56307]: <info>  [1764352939.2027] manager: (tap8a4718af-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Nov 28 18:02:19 compute-0 ovn_controller[97771]: 2025-11-28T18:02:19Z|00040|binding|INFO|Claiming lport 8a4718af-d672-4453-91df-ba01f3157931 for this chassis.
Nov 28 18:02:19 compute-0 ovn_controller[97771]: 2025-11-28T18:02:19Z|00041|binding|INFO|8a4718af-d672-4453-91df-ba01f3157931: Claiming fa:16:3e:e8:a3:93 192.168.0.228
Nov 28 18:02:19 compute-0 nova_compute[189296]: 2025-11-28 18:02:19.207 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:19.221 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:a3:93 192.168.0.228'], port_security=['fa:16:3e:e8:a3:93 192.168.0.228'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-po7lv7knpyto-qenf7da4luz4-6vcrszb4rezp-port-dquh4cc5fhnl', 'neutron:cidrs': '192.168.0.228/24', 'neutron:device_id': '6b9c0462-2408-4f6c-ae23-4cff0d9ef19d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-po7lv7knpyto-qenf7da4luz4-6vcrszb4rezp-port-dquh4cc5fhnl', 'neutron:project_id': '79ee04b003ca4eb8a045699c7852a8b0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a309e23b-efb6-4377-8050-5a658324ee07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.214'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37710b57-0bdd-4c1a-aa8d-366aa83fbf51, chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=8a4718af-d672-4453-91df-ba01f3157931) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:02:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:19.223 106624 INFO neutron.agent.ovn.metadata.agent [-] Port 8a4718af-d672-4453-91df-ba01f3157931 in datapath 5cc11a5f-7338-49fd-ba02-2db7ff676c4f bound to our chassis#033[00m
Nov 28 18:02:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:19.224 106624 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cc11a5f-7338-49fd-ba02-2db7ff676c4f#033[00m
Nov 28 18:02:19 compute-0 ovn_controller[97771]: 2025-11-28T18:02:19Z|00042|binding|INFO|Setting lport 8a4718af-d672-4453-91df-ba01f3157931 ovn-installed in OVS
Nov 28 18:02:19 compute-0 ovn_controller[97771]: 2025-11-28T18:02:19Z|00043|binding|INFO|Setting lport 8a4718af-d672-4453-91df-ba01f3157931 up in Southbound
Nov 28 18:02:19 compute-0 nova_compute[189296]: 2025-11-28 18:02:19.237 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:19.240 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[e6c3598e-02e7-40a3-89e9-6c3facdad014]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:02:19 compute-0 systemd-machined[155703]: New machine qemu-3-instance-00000003.
Nov 28 18:02:19 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Nov 28 18:02:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:19.275 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[4cd6e4c1-f3a5-4633-86b9-8c660704b2f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:02:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:19.280 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[ce1f8bd8-decf-4c96-9c1c-baffe70043cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:02:19 compute-0 podman[241607]: 2025-11-28 18:02:19.283654062 +0000 UTC m=+0.097257553 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, com.redhat.component=ubi9-container, vcs-type=git, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release-0.7.12=, name=ubi9, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc.)
Nov 28 18:02:19 compute-0 systemd-udevd[241650]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 18:02:19 compute-0 nova_compute[189296]: 2025-11-28 18:02:19.288 189300 DEBUG nova.network.neutron [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Successfully updated port: 7b3b067b-5dff-4342-98fa-c66e054d025d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 28 18:02:19 compute-0 NetworkManager[56307]: <info>  [1764352939.3045] device (tap8a4718af-d6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 28 18:02:19 compute-0 NetworkManager[56307]: <info>  [1764352939.3054] device (tap8a4718af-d6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 28 18:02:19 compute-0 nova_compute[189296]: 2025-11-28 18:02:19.305 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "refresh_cache-fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:02:19 compute-0 nova_compute[189296]: 2025-11-28 18:02:19.305 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquired lock "refresh_cache-fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:02:19 compute-0 nova_compute[189296]: 2025-11-28 18:02:19.305 189300 DEBUG nova.network.neutron [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 28 18:02:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:19.314 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[6e53cfa1-23a8-48e3-8d6f-402a16d65058]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:02:19 compute-0 podman[241606]: 2025-11-28 18:02:19.329861719 +0000 UTC m=+0.142390365 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 28 18:02:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:19.330 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[51f7836f-efc7-46bf-9578-3acee7ea7f10]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc11a5f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:38:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 370971, 'reachable_time': 41615, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241667, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:02:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:19.346 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[d7dd6c91-97c5-434a-8719-db2269f1b8db]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap5cc11a5f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 370983, 'tstamp': 370983}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241668, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5cc11a5f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 370986, 'tstamp': 370986}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241668, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:02:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:19.348 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc11a5f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:02:19 compute-0 nova_compute[189296]: 2025-11-28 18:02:19.349 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:19 compute-0 nova_compute[189296]: 2025-11-28 18:02:19.350 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:19.351 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cc11a5f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:02:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:19.351 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:02:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:19.352 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cc11a5f-70, col_values=(('external_ids', {'iface-id': '467e3797-177d-4174-b963-0efbd15595b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:02:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:19.353 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:02:19 compute-0 nova_compute[189296]: 2025-11-28 18:02:19.370 189300 DEBUG nova.compute.manager [req-0dede024-1065-4825-8d7b-db0952395ecb req-1941f547-047a-4971-869b-e769795cade4 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Received event network-changed-7b3b067b-5dff-4342-98fa-c66e054d025d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:02:19 compute-0 nova_compute[189296]: 2025-11-28 18:02:19.370 189300 DEBUG nova.compute.manager [req-0dede024-1065-4825-8d7b-db0952395ecb req-1941f547-047a-4971-869b-e769795cade4 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Refreshing instance network info cache due to event network-changed-7b3b067b-5dff-4342-98fa-c66e054d025d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 28 18:02:19 compute-0 nova_compute[189296]: 2025-11-28 18:02:19.370 189300 DEBUG oslo_concurrency.lockutils [req-0dede024-1065-4825-8d7b-db0952395ecb req-1941f547-047a-4971-869b-e769795cade4 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:02:19 compute-0 nova_compute[189296]: 2025-11-28 18:02:19.625 189300 DEBUG nova.network.neutron [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 28 18:02:19 compute-0 nova_compute[189296]: 2025-11-28 18:02:19.928 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764352939.927849, 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:02:19 compute-0 nova_compute[189296]: 2025-11-28 18:02:19.929 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] VM Started (Lifecycle Event)#033[00m
Nov 28 18:02:19 compute-0 nova_compute[189296]: 2025-11-28 18:02:19.946 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:02:19 compute-0 nova_compute[189296]: 2025-11-28 18:02:19.951 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764352939.92904, 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:02:19 compute-0 nova_compute[189296]: 2025-11-28 18:02:19.952 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] VM Paused (Lifecycle Event)#033[00m
Nov 28 18:02:19 compute-0 nova_compute[189296]: 2025-11-28 18:02:19.968 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:02:19 compute-0 nova_compute[189296]: 2025-11-28 18:02:19.973 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 18:02:19 compute-0 nova_compute[189296]: 2025-11-28 18:02:19.992 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 28 18:02:20 compute-0 nova_compute[189296]: 2025-11-28 18:02:20.533 189300 DEBUG nova.network.neutron [req-1a46ac66-24a1-4f5d-8103-907622a0de36 req-0cba1ac8-c5f6-45ab-a29c-c6a6870a5514 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Updated VIF entry in instance network info cache for port 8a4718af-d672-4453-91df-ba01f3157931. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 28 18:02:20 compute-0 nova_compute[189296]: 2025-11-28 18:02:20.534 189300 DEBUG nova.network.neutron [req-1a46ac66-24a1-4f5d-8103-907622a0de36 req-0cba1ac8-c5f6-45ab-a29c-c6a6870a5514 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Updating instance_info_cache with network_info: [{"id": "8a4718af-d672-4453-91df-ba01f3157931", "address": "fa:16:3e:e8:a3:93", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.228", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a4718af-d6", "ovs_interfaceid": "8a4718af-d672-4453-91df-ba01f3157931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:02:20 compute-0 nova_compute[189296]: 2025-11-28 18:02:20.550 189300 DEBUG oslo_concurrency.lockutils [req-1a46ac66-24a1-4f5d-8103-907622a0de36 req-0cba1ac8-c5f6-45ab-a29c-c6a6870a5514 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-6b9c0462-2408-4f6c-ae23-4cff0d9ef19d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.458 189300 DEBUG nova.compute.manager [req-4ed860a7-efc5-442b-b5ae-e27f228d92b4 req-bf44f813-ca40-4c7a-bf66-841f4dd1a44e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Received event network-vif-plugged-8a4718af-d672-4453-91df-ba01f3157931 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.459 189300 DEBUG oslo_concurrency.lockutils [req-4ed860a7-efc5-442b-b5ae-e27f228d92b4 req-bf44f813-ca40-4c7a-bf66-841f4dd1a44e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "6b9c0462-2408-4f6c-ae23-4cff0d9ef19d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.459 189300 DEBUG oslo_concurrency.lockutils [req-4ed860a7-efc5-442b-b5ae-e27f228d92b4 req-bf44f813-ca40-4c7a-bf66-841f4dd1a44e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "6b9c0462-2408-4f6c-ae23-4cff0d9ef19d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.459 189300 DEBUG oslo_concurrency.lockutils [req-4ed860a7-efc5-442b-b5ae-e27f228d92b4 req-bf44f813-ca40-4c7a-bf66-841f4dd1a44e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "6b9c0462-2408-4f6c-ae23-4cff0d9ef19d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.460 189300 DEBUG nova.compute.manager [req-4ed860a7-efc5-442b-b5ae-e27f228d92b4 req-bf44f813-ca40-4c7a-bf66-841f4dd1a44e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Processing event network-vif-plugged-8a4718af-d672-4453-91df-ba01f3157931 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.460 189300 DEBUG nova.compute.manager [req-4ed860a7-efc5-442b-b5ae-e27f228d92b4 req-bf44f813-ca40-4c7a-bf66-841f4dd1a44e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Received event network-vif-plugged-8a4718af-d672-4453-91df-ba01f3157931 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.460 189300 DEBUG oslo_concurrency.lockutils [req-4ed860a7-efc5-442b-b5ae-e27f228d92b4 req-bf44f813-ca40-4c7a-bf66-841f4dd1a44e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "6b9c0462-2408-4f6c-ae23-4cff0d9ef19d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.460 189300 DEBUG oslo_concurrency.lockutils [req-4ed860a7-efc5-442b-b5ae-e27f228d92b4 req-bf44f813-ca40-4c7a-bf66-841f4dd1a44e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "6b9c0462-2408-4f6c-ae23-4cff0d9ef19d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.460 189300 DEBUG oslo_concurrency.lockutils [req-4ed860a7-efc5-442b-b5ae-e27f228d92b4 req-bf44f813-ca40-4c7a-bf66-841f4dd1a44e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "6b9c0462-2408-4f6c-ae23-4cff0d9ef19d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.461 189300 DEBUG nova.compute.manager [req-4ed860a7-efc5-442b-b5ae-e27f228d92b4 req-bf44f813-ca40-4c7a-bf66-841f4dd1a44e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] No waiting events found dispatching network-vif-plugged-8a4718af-d672-4453-91df-ba01f3157931 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.461 189300 WARNING nova.compute.manager [req-4ed860a7-efc5-442b-b5ae-e27f228d92b4 req-bf44f813-ca40-4c7a-bf66-841f4dd1a44e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Received unexpected event network-vif-plugged-8a4718af-d672-4453-91df-ba01f3157931 for instance with vm_state building and task_state spawning.#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.461 189300 DEBUG nova.compute.manager [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.466 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764352941.46606, 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.467 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] VM Resumed (Lifecycle Event)#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.470 189300 DEBUG nova.virt.libvirt.driver [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.474 189300 INFO nova.virt.libvirt.driver [-] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Instance spawned successfully.#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.475 189300 DEBUG nova.virt.libvirt.driver [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.489 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.500 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.505 189300 DEBUG nova.virt.libvirt.driver [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.505 189300 DEBUG nova.virt.libvirt.driver [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.506 189300 DEBUG nova.virt.libvirt.driver [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.506 189300 DEBUG nova.virt.libvirt.driver [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.507 189300 DEBUG nova.virt.libvirt.driver [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.507 189300 DEBUG nova.virt.libvirt.driver [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.530 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.565 189300 INFO nova.compute.manager [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Took 8.13 seconds to spawn the instance on the hypervisor.#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.566 189300 DEBUG nova.compute.manager [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.615 189300 INFO nova.compute.manager [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Took 8.58 seconds to build instance.#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.628 189300 DEBUG oslo_concurrency.lockutils [None req-0efc5d12-9c47-45ae-93ad-86e76d03e550 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "6b9c0462-2408-4f6c-ae23-4cff0d9ef19d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.747 189300 DEBUG nova.network.neutron [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Updating instance_info_cache with network_info: [{"id": "7b3b067b-5dff-4342-98fa-c66e054d025d", "address": "fa:16:3e:7e:01:76", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b067b-5d", "ovs_interfaceid": "7b3b067b-5dff-4342-98fa-c66e054d025d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.805 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Releasing lock "refresh_cache-fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.806 189300 DEBUG nova.compute.manager [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Instance network_info: |[{"id": "7b3b067b-5dff-4342-98fa-c66e054d025d", "address": "fa:16:3e:7e:01:76", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b067b-5d", "ovs_interfaceid": "7b3b067b-5dff-4342-98fa-c66e054d025d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.806 189300 DEBUG oslo_concurrency.lockutils [req-0dede024-1065-4825-8d7b-db0952395ecb req-1941f547-047a-4971-869b-e769795cade4 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.806 189300 DEBUG nova.network.neutron [req-0dede024-1065-4825-8d7b-db0952395ecb req-1941f547-047a-4971-869b-e769795cade4 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Refreshing network info cache for port 7b3b067b-5dff-4342-98fa-c66e054d025d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.809 189300 DEBUG nova.virt.libvirt.driver [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Start _get_guest_xml network_info=[{"id": "7b3b067b-5dff-4342-98fa-c66e054d025d", "address": "fa:16:3e:7e:01:76", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b067b-5d", "ovs_interfaceid": "7b3b067b-5dff-4342-98fa-c66e054d025d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-28T17:54:35Z,direct_url=<?>,disk_format='qcow2',id=f54c2688-82d2-4cd3-8c3b-96e774162948,min_disk=0,min_ram=0,name='cirros',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-28T17:54:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'image_id': 'f54c2688-82d2-4cd3-8c3b-96e774162948'}], 'ephemerals': [{'device_type': 'disk', 'guest_format': None, 'size': 1, 'encryption_options': None, 'device_name': '/dev/vdb', 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.815 189300 WARNING nova.virt.libvirt.driver [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.820 189300 DEBUG nova.virt.libvirt.host [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.821 189300 DEBUG nova.virt.libvirt.host [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.833 189300 DEBUG nova.virt.libvirt.host [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.833 189300 DEBUG nova.virt.libvirt.host [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.834 189300 DEBUG nova.virt.libvirt.driver [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.834 189300 DEBUG nova.virt.hardware [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-28T17:54:40Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='e125fa74-9e9f-47dc-8c8e-699980f99f10',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-28T17:54:35Z,direct_url=<?>,disk_format='qcow2',id=f54c2688-82d2-4cd3-8c3b-96e774162948,min_disk=0,min_ram=0,name='cirros',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-28T17:54:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.834 189300 DEBUG nova.virt.hardware [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.834 189300 DEBUG nova.virt.hardware [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.835 189300 DEBUG nova.virt.hardware [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.835 189300 DEBUG nova.virt.hardware [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.835 189300 DEBUG nova.virt.hardware [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.835 189300 DEBUG nova.virt.hardware [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.835 189300 DEBUG nova.virt.hardware [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.835 189300 DEBUG nova.virt.hardware [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.836 189300 DEBUG nova.virt.hardware [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.836 189300 DEBUG nova.virt.hardware [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.839 189300 DEBUG nova.virt.libvirt.vif [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T18:02:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-7knpyto-myqv6vc5iwu6-3wmt66b4jk5x-vnf-uuehi3czwwyv',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7knpyto-myqv6vc5iwu6-3wmt66b4jk5x-vnf-uuehi3czwwyv',id=4,image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='ac6a0a76-f006-4c50-a4a8-904a1f128161'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='79ee04b003ca4eb8a045699c7852a8b0',ramdisk_id='',reservation_id='r-z06d29og',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:02:17Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04Mjg2NjU2MTQzNDgwNTU5MDcyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgyODY2NTYxNDM0ODA1NTkwNzI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODI4NjY1NjE0MzQ4MDU1OTA3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgyODY2NTYxNDM0ODA1NTkwNzI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04Mjg2NjU2MTQzNDgwNTU5MDcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04Mjg2NjU2MTQzNDgwNTU5MDcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Nov 28 18:02:21 compute-0 nova_compute[189296]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODI4NjY1NjE0MzQ4MDU1OTA3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgyODY2NTYxNDM0ODA1NTkwNzI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04Mjg2NjU2MTQzNDgwNTU5MDcyPT0tLQo=',user_id='6a35450c34a344b1a4e63aae1be2b971',uuid=fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b3b067b-5dff-4342-98fa-c66e054d025d", "address": "fa:16:3e:7e:01:76", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b067b-5d", "ovs_interfaceid": "7b3b067b-5dff-4342-98fa-c66e054d025d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.839 189300 DEBUG nova.network.os_vif_util [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converting VIF {"id": "7b3b067b-5dff-4342-98fa-c66e054d025d", "address": "fa:16:3e:7e:01:76", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b067b-5d", "ovs_interfaceid": "7b3b067b-5dff-4342-98fa-c66e054d025d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.840 189300 DEBUG nova.network.os_vif_util [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:01:76,bridge_name='br-int',has_traffic_filtering=True,id=7b3b067b-5dff-4342-98fa-c66e054d025d,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7b3b067b-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.841 189300 DEBUG nova.objects.instance [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lazy-loading 'pci_devices' on Instance uuid fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:02:21 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.855 189300 DEBUG nova.virt.libvirt.driver [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] End _get_guest_xml xml=<domain type="kvm">
Nov 28 18:02:21 compute-0 nova_compute[189296]:  <uuid>fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf</uuid>
Nov 28 18:02:21 compute-0 nova_compute[189296]:  <name>instance-00000004</name>
Nov 28 18:02:21 compute-0 nova_compute[189296]:  <memory>524288</memory>
Nov 28 18:02:21 compute-0 nova_compute[189296]:  <vcpu>1</vcpu>
Nov 28 18:02:21 compute-0 nova_compute[189296]:  <metadata>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <nova:name>vn-7knpyto-myqv6vc5iwu6-3wmt66b4jk5x-vnf-uuehi3czwwyv</nova:name>
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <nova:creationTime>2025-11-28 18:02:21</nova:creationTime>
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <nova:flavor name="m1.small">
Nov 28 18:02:21 compute-0 nova_compute[189296]:        <nova:memory>512</nova:memory>
Nov 28 18:02:21 compute-0 nova_compute[189296]:        <nova:disk>1</nova:disk>
Nov 28 18:02:21 compute-0 nova_compute[189296]:        <nova:swap>0</nova:swap>
Nov 28 18:02:21 compute-0 nova_compute[189296]:        <nova:ephemeral>1</nova:ephemeral>
Nov 28 18:02:21 compute-0 nova_compute[189296]:        <nova:vcpus>1</nova:vcpus>
Nov 28 18:02:21 compute-0 nova_compute[189296]:      </nova:flavor>
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <nova:owner>
Nov 28 18:02:21 compute-0 nova_compute[189296]:        <nova:user uuid="6a35450c34a344b1a4e63aae1be2b971">admin</nova:user>
Nov 28 18:02:21 compute-0 nova_compute[189296]:        <nova:project uuid="79ee04b003ca4eb8a045699c7852a8b0">admin</nova:project>
Nov 28 18:02:21 compute-0 nova_compute[189296]:      </nova:owner>
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <nova:root type="image" uuid="f54c2688-82d2-4cd3-8c3b-96e774162948"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <nova:ports>
Nov 28 18:02:21 compute-0 nova_compute[189296]:        <nova:port uuid="7b3b067b-5dff-4342-98fa-c66e054d025d">
Nov 28 18:02:21 compute-0 nova_compute[189296]:          <nova:ip type="fixed" address="192.168.0.178" ipVersion="4"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:        </nova:port>
Nov 28 18:02:21 compute-0 nova_compute[189296]:      </nova:ports>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    </nova:instance>
Nov 28 18:02:21 compute-0 nova_compute[189296]:  </metadata>
Nov 28 18:02:21 compute-0 nova_compute[189296]:  <sysinfo type="smbios">
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <system>
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <entry name="manufacturer">RDO</entry>
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <entry name="product">OpenStack Compute</entry>
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <entry name="serial">fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf</entry>
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <entry name="uuid">fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf</entry>
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <entry name="family">Virtual Machine</entry>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    </system>
Nov 28 18:02:21 compute-0 nova_compute[189296]:  </sysinfo>
Nov 28 18:02:21 compute-0 nova_compute[189296]:  <os>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <boot dev="hd"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <smbios mode="sysinfo"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:  </os>
Nov 28 18:02:21 compute-0 nova_compute[189296]:  <features>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <acpi/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <apic/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <vmcoreinfo/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:  </features>
Nov 28 18:02:21 compute-0 nova_compute[189296]:  <clock offset="utc">
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <timer name="pit" tickpolicy="delay"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <timer name="hpet" present="no"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:  </clock>
Nov 28 18:02:21 compute-0 nova_compute[189296]:  <cpu mode="host-model" match="exact">
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <topology sockets="1" cores="1" threads="1"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:  </cpu>
Nov 28 18:02:21 compute-0 nova_compute[189296]:  <devices>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <disk type="file" device="disk">
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <target dev="vda" bus="virtio"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <disk type="file" device="disk">
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.eph0"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <target dev="vdb" bus="virtio"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <disk type="file" device="cdrom">
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <driver name="qemu" type="raw" cache="none"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.config"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <target dev="sda" bus="sata"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <interface type="ethernet">
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <mac address="fa:16:3e:7e:01:76"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <driver name="vhost" rx_queue_size="512"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <mtu size="1442"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <target dev="tap7b3b067b-5d"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    </interface>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <serial type="pty">
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <log file="/var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/console.log" append="off"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    </serial>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <video>
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    </video>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <input type="tablet" bus="usb"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <rng model="virtio">
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <backend model="random">/dev/urandom</backend>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    </rng>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <controller type="usb" index="0"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    <memballoon model="virtio">
Nov 28 18:02:21 compute-0 nova_compute[189296]:      <stats period="10"/>
Nov 28 18:02:21 compute-0 nova_compute[189296]:    </memballoon>
Nov 28 18:02:21 compute-0 nova_compute[189296]:  </devices>
Nov 28 18:02:21 compute-0 nova_compute[189296]: </domain>
Nov 28 18:02:21 compute-0 nova_compute[189296]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.855 189300 DEBUG nova.compute.manager [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Preparing to wait for external event network-vif-plugged-7b3b067b-5dff-4342-98fa-c66e054d025d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.855 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.856 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.856 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.856 189300 DEBUG nova.virt.libvirt.vif [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T18:02:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-7knpyto-myqv6vc5iwu6-3wmt66b4jk5x-vnf-uuehi3czwwyv',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7knpyto-myqv6vc5iwu6-3wmt66b4jk5x-vnf-uuehi3czwwyv',id=4,image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='ac6a0a76-f006-4c50-a4a8-904a1f128161'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='79ee04b003ca4eb8a045699c7852a8b0',ramdisk_id='',reservation_id='r-z06d29og',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:02:17Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04Mjg2NjU2MTQzNDgwNTU5MDcyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgyODY2NTYxNDM0ODA1NTkwNzI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODI4NjY1NjE0MzQ4MDU1OTA3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgyODY2NTYxNDM0ODA1NTkwNzI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04Mjg2NjU2MTQzNDgwNTU5MDcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04Mjg2NjU2MTQzNDgwNTU5MDcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Nov 28 18:02:21 compute-0 nova_compute[189296]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODI4NjY1NjE0MzQ4MDU1OTA3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgyODY2NTYxNDM0ODA1NTkwNzI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04Mjg2NjU2MTQzNDgwNTU5MDcyPT0tLQo=',user_id='6a35450c34a344b1a4e63aae1be2b971',uuid=fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b3b067b-5dff-4342-98fa-c66e054d025d", "address": "fa:16:3e:7e:01:76", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b067b-5d", "ovs_interfaceid": "7b3b067b-5dff-4342-98fa-c66e054d025d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.857 189300 DEBUG nova.network.os_vif_util [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converting VIF {"id": "7b3b067b-5dff-4342-98fa-c66e054d025d", "address": "fa:16:3e:7e:01:76", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b067b-5d", "ovs_interfaceid": "7b3b067b-5dff-4342-98fa-c66e054d025d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.857 189300 DEBUG nova.network.os_vif_util [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:01:76,bridge_name='br-int',has_traffic_filtering=True,id=7b3b067b-5dff-4342-98fa-c66e054d025d,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7b3b067b-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.858 189300 DEBUG os_vif [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:01:76,bridge_name='br-int',has_traffic_filtering=True,id=7b3b067b-5dff-4342-98fa-c66e054d025d,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7b3b067b-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.858 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.858 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.859 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.862 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.863 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b3b067b-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.863 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7b3b067b-5d, col_values=(('external_ids', {'iface-id': '7b3b067b-5dff-4342-98fa-c66e054d025d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7e:01:76', 'vm-uuid': 'fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:02:21 compute-0 NetworkManager[56307]: <info>  [1764352941.8663] manager: (tap7b3b067b-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.865 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.867 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.877 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.878 189300 INFO os_vif [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:01:76,bridge_name='br-int',has_traffic_filtering=True,id=7b3b067b-5dff-4342-98fa-c66e054d025d,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7b3b067b-5d')#033[00m
Nov 28 18:02:21 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.938 189300 DEBUG nova.virt.libvirt.driver [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.938 189300 DEBUG nova.virt.libvirt.driver [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.938 189300 DEBUG nova.virt.libvirt.driver [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.938 189300 DEBUG nova.virt.libvirt.driver [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] No VIF found with MAC fa:16:3e:7e:01:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 28 18:02:21 compute-0 nova_compute[189296]: 2025-11-28 18:02:21.939 189300 INFO nova.virt.libvirt.driver [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Using config drive#033[00m
Nov 28 18:02:21 compute-0 podman[241678]: 2025-11-28 18:02:21.995419174 +0000 UTC m=+0.133971019 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 28 18:02:22 compute-0 rsyslogd[236416]: message too long (8192) with configured size 8096, begin of message is: 2025-11-28 18:02:21.839 189300 DEBUG nova.virt.libvirt.vif [None req-245fa33d-80 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 28 18:02:22 compute-0 rsyslogd[236416]: message too long (8192) with configured size 8096, begin of message is: 2025-11-28 18:02:21.856 189300 DEBUG nova.virt.libvirt.vif [None req-245fa33d-80 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 28 18:02:22 compute-0 nova_compute[189296]: 2025-11-28 18:02:22.694 189300 INFO nova.virt.libvirt.driver [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Creating config drive at /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.config#033[00m
Nov 28 18:02:22 compute-0 nova_compute[189296]: 2025-11-28 18:02:22.703 189300 DEBUG oslo_concurrency.processutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1gepdl2n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:02:22 compute-0 nova_compute[189296]: 2025-11-28 18:02:22.841 189300 DEBUG oslo_concurrency.processutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1gepdl2n" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:02:22 compute-0 kernel: tap7b3b067b-5d: entered promiscuous mode
Nov 28 18:02:22 compute-0 NetworkManager[56307]: <info>  [1764352942.9333] manager: (tap7b3b067b-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Nov 28 18:02:22 compute-0 ovn_controller[97771]: 2025-11-28T18:02:22Z|00044|binding|INFO|Claiming lport 7b3b067b-5dff-4342-98fa-c66e054d025d for this chassis.
Nov 28 18:02:22 compute-0 ovn_controller[97771]: 2025-11-28T18:02:22Z|00045|binding|INFO|7b3b067b-5dff-4342-98fa-c66e054d025d: Claiming fa:16:3e:7e:01:76 192.168.0.178
Nov 28 18:02:22 compute-0 nova_compute[189296]: 2025-11-28 18:02:22.935 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:22 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:22.951 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:01:76 192.168.0.178'], port_security=['fa:16:3e:7e:01:76 192.168.0.178'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-po7lv7knpyto-myqv6vc5iwu6-3wmt66b4jk5x-port-25v5lqpwleyb', 'neutron:cidrs': '192.168.0.178/24', 'neutron:device_id': 'fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-po7lv7knpyto-myqv6vc5iwu6-3wmt66b4jk5x-port-25v5lqpwleyb', 'neutron:project_id': '79ee04b003ca4eb8a045699c7852a8b0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a309e23b-efb6-4377-8050-5a658324ee07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37710b57-0bdd-4c1a-aa8d-366aa83fbf51, chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=7b3b067b-5dff-4342-98fa-c66e054d025d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:02:22 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:22.952 106624 INFO neutron.agent.ovn.metadata.agent [-] Port 7b3b067b-5dff-4342-98fa-c66e054d025d in datapath 5cc11a5f-7338-49fd-ba02-2db7ff676c4f bound to our chassis#033[00m
Nov 28 18:02:22 compute-0 ovn_controller[97771]: 2025-11-28T18:02:22Z|00046|binding|INFO|Setting lport 7b3b067b-5dff-4342-98fa-c66e054d025d ovn-installed in OVS
Nov 28 18:02:22 compute-0 ovn_controller[97771]: 2025-11-28T18:02:22Z|00047|binding|INFO|Setting lport 7b3b067b-5dff-4342-98fa-c66e054d025d up in Southbound
Nov 28 18:02:22 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:22.953 106624 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cc11a5f-7338-49fd-ba02-2db7ff676c4f#033[00m
Nov 28 18:02:22 compute-0 nova_compute[189296]: 2025-11-28 18:02:22.954 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:22 compute-0 nova_compute[189296]: 2025-11-28 18:02:22.957 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:22 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:22.968 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[6a5b5f98-ecd6-4110-846f-e55c50b49df9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:02:22 compute-0 systemd-udevd[241746]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 18:02:22 compute-0 NetworkManager[56307]: <info>  [1764352942.9860] device (tap7b3b067b-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 28 18:02:22 compute-0 NetworkManager[56307]: <info>  [1764352942.9869] device (tap7b3b067b-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 28 18:02:22 compute-0 systemd-machined[155703]: New machine qemu-4-instance-00000004.
Nov 28 18:02:23 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Nov 28 18:02:23 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:23.004 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[74d637b9-7841-42ab-a209-7021b0a29294]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:02:23 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:23.007 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[dc2986f0-102d-404b-a128-41aa83ed262c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:02:23 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:23.032 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[8edf482b-6323-4575-8954-8d5350b398d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:02:23 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:23.049 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[01d460b8-8231-437f-a621-88be9f470a56]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc11a5f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:38:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 370971, 'reachable_time': 41615, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241755, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:02:23 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:23.066 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[1bcdf120-f6ce-473d-b798-4fad491098e1]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap5cc11a5f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 370983, 'tstamp': 370983}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241759, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5cc11a5f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 370986, 'tstamp': 370986}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241759, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:02:23 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:23.067 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc11a5f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.069 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.070 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:23 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:23.070 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cc11a5f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:02:23 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:23.071 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:02:23 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:23.071 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cc11a5f-70, col_values=(('external_ids', {'iface-id': '467e3797-177d-4174-b963-0efbd15595b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:02:23 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:23.071 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.099 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.610 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764352943.6096785, fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.611 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] VM Started (Lifecycle Event)#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.641 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.646 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764352943.60993, fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.646 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] VM Paused (Lifecycle Event)#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.661 189300 DEBUG nova.compute.manager [req-db3e583a-39db-4533-9566-e4804b50c4ab req-08f875a4-d730-4b3d-8efc-40b070f10192 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Received event network-vif-plugged-7b3b067b-5dff-4342-98fa-c66e054d025d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.662 189300 DEBUG oslo_concurrency.lockutils [req-db3e583a-39db-4533-9566-e4804b50c4ab req-08f875a4-d730-4b3d-8efc-40b070f10192 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.662 189300 DEBUG oslo_concurrency.lockutils [req-db3e583a-39db-4533-9566-e4804b50c4ab req-08f875a4-d730-4b3d-8efc-40b070f10192 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.662 189300 DEBUG oslo_concurrency.lockutils [req-db3e583a-39db-4533-9566-e4804b50c4ab req-08f875a4-d730-4b3d-8efc-40b070f10192 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.663 189300 DEBUG nova.compute.manager [req-db3e583a-39db-4533-9566-e4804b50c4ab req-08f875a4-d730-4b3d-8efc-40b070f10192 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Processing event network-vif-plugged-7b3b067b-5dff-4342-98fa-c66e054d025d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.663 189300 DEBUG nova.compute.manager [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.669 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.670 189300 DEBUG nova.virt.libvirt.driver [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.675 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764352943.6674063, fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.676 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] VM Resumed (Lifecycle Event)#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.679 189300 INFO nova.virt.libvirt.driver [-] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Instance spawned successfully.#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.680 189300 DEBUG nova.virt.libvirt.driver [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.706 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.715 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.717 189300 DEBUG nova.virt.libvirt.driver [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.718 189300 DEBUG nova.virt.libvirt.driver [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.718 189300 DEBUG nova.virt.libvirt.driver [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.719 189300 DEBUG nova.virt.libvirt.driver [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.720 189300 DEBUG nova.virt.libvirt.driver [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.720 189300 DEBUG nova.virt.libvirt.driver [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.761 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.785 189300 INFO nova.compute.manager [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Took 6.52 seconds to spawn the instance on the hypervisor.#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.786 189300 DEBUG nova.compute.manager [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.859 189300 INFO nova.compute.manager [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Took 7.02 seconds to build instance.#033[00m
Nov 28 18:02:23 compute-0 nova_compute[189296]: 2025-11-28 18:02:23.877 189300 DEBUG oslo_concurrency.lockutils [None req-245fa33d-8066-483c-9dab-65b0a179160b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:24 compute-0 nova_compute[189296]: 2025-11-28 18:02:24.486 189300 DEBUG nova.network.neutron [req-0dede024-1065-4825-8d7b-db0952395ecb req-1941f547-047a-4971-869b-e769795cade4 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Updated VIF entry in instance network info cache for port 7b3b067b-5dff-4342-98fa-c66e054d025d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 28 18:02:24 compute-0 nova_compute[189296]: 2025-11-28 18:02:24.487 189300 DEBUG nova.network.neutron [req-0dede024-1065-4825-8d7b-db0952395ecb req-1941f547-047a-4971-869b-e769795cade4 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Updating instance_info_cache with network_info: [{"id": "7b3b067b-5dff-4342-98fa-c66e054d025d", "address": "fa:16:3e:7e:01:76", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b067b-5d", "ovs_interfaceid": "7b3b067b-5dff-4342-98fa-c66e054d025d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:02:24 compute-0 nova_compute[189296]: 2025-11-28 18:02:24.514 189300 DEBUG oslo_concurrency.lockutils [req-0dede024-1065-4825-8d7b-db0952395ecb req-1941f547-047a-4971-869b-e769795cade4 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:02:25 compute-0 nova_compute[189296]: 2025-11-28 18:02:25.806 189300 DEBUG nova.compute.manager [req-de4e6ef9-701b-4dfe-9d99-fe4dd703abca req-95ad0c30-bef0-44e0-8207-bb6994382b9f 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Received event network-vif-plugged-7b3b067b-5dff-4342-98fa-c66e054d025d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:02:25 compute-0 nova_compute[189296]: 2025-11-28 18:02:25.807 189300 DEBUG oslo_concurrency.lockutils [req-de4e6ef9-701b-4dfe-9d99-fe4dd703abca req-95ad0c30-bef0-44e0-8207-bb6994382b9f 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:25 compute-0 nova_compute[189296]: 2025-11-28 18:02:25.807 189300 DEBUG oslo_concurrency.lockutils [req-de4e6ef9-701b-4dfe-9d99-fe4dd703abca req-95ad0c30-bef0-44e0-8207-bb6994382b9f 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:25 compute-0 nova_compute[189296]: 2025-11-28 18:02:25.808 189300 DEBUG oslo_concurrency.lockutils [req-de4e6ef9-701b-4dfe-9d99-fe4dd703abca req-95ad0c30-bef0-44e0-8207-bb6994382b9f 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:25 compute-0 nova_compute[189296]: 2025-11-28 18:02:25.808 189300 DEBUG nova.compute.manager [req-de4e6ef9-701b-4dfe-9d99-fe4dd703abca req-95ad0c30-bef0-44e0-8207-bb6994382b9f 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] No waiting events found dispatching network-vif-plugged-7b3b067b-5dff-4342-98fa-c66e054d025d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:02:25 compute-0 nova_compute[189296]: 2025-11-28 18:02:25.808 189300 WARNING nova.compute.manager [req-de4e6ef9-701b-4dfe-9d99-fe4dd703abca req-95ad0c30-bef0-44e0-8207-bb6994382b9f 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Received unexpected event network-vif-plugged-7b3b067b-5dff-4342-98fa-c66e054d025d for instance with vm_state active and task_state None.#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.130 189300 DEBUG oslo_concurrency.lockutils [None req-7317d738-d4e0-4199-8e1f-adce6264e57d 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "6b9c0462-2408-4f6c-ae23-4cff0d9ef19d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.131 189300 DEBUG oslo_concurrency.lockutils [None req-7317d738-d4e0-4199-8e1f-adce6264e57d 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "6b9c0462-2408-4f6c-ae23-4cff0d9ef19d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.131 189300 DEBUG oslo_concurrency.lockutils [None req-7317d738-d4e0-4199-8e1f-adce6264e57d 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "6b9c0462-2408-4f6c-ae23-4cff0d9ef19d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.131 189300 DEBUG oslo_concurrency.lockutils [None req-7317d738-d4e0-4199-8e1f-adce6264e57d 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "6b9c0462-2408-4f6c-ae23-4cff0d9ef19d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.132 189300 DEBUG oslo_concurrency.lockutils [None req-7317d738-d4e0-4199-8e1f-adce6264e57d 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "6b9c0462-2408-4f6c-ae23-4cff0d9ef19d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.133 189300 INFO nova.compute.manager [None req-7317d738-d4e0-4199-8e1f-adce6264e57d 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Terminating instance#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.134 189300 DEBUG nova.compute.manager [None req-7317d738-d4e0-4199-8e1f-adce6264e57d 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 28 18:02:26 compute-0 kernel: tap8a4718af-d6 (unregistering): left promiscuous mode
Nov 28 18:02:26 compute-0 NetworkManager[56307]: <info>  [1764352946.1611] device (tap8a4718af-d6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 28 18:02:26 compute-0 ovn_controller[97771]: 2025-11-28T18:02:26Z|00048|binding|INFO|Releasing lport 8a4718af-d672-4453-91df-ba01f3157931 from this chassis (sb_readonly=0)
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.164 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:26 compute-0 ovn_controller[97771]: 2025-11-28T18:02:26Z|00049|binding|INFO|Setting lport 8a4718af-d672-4453-91df-ba01f3157931 down in Southbound
Nov 28 18:02:26 compute-0 ovn_controller[97771]: 2025-11-28T18:02:26Z|00050|binding|INFO|Removing iface tap8a4718af-d6 ovn-installed in OVS
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.169 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:26 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:26.174 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:a3:93 192.168.0.228'], port_security=['fa:16:3e:e8:a3:93 192.168.0.228'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-po7lv7knpyto-qenf7da4luz4-6vcrszb4rezp-port-dquh4cc5fhnl', 'neutron:cidrs': '192.168.0.228/24', 'neutron:device_id': '6b9c0462-2408-4f6c-ae23-4cff0d9ef19d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-po7lv7knpyto-qenf7da4luz4-6vcrszb4rezp-port-dquh4cc5fhnl', 'neutron:project_id': '79ee04b003ca4eb8a045699c7852a8b0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a309e23b-efb6-4377-8050-5a658324ee07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.214', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37710b57-0bdd-4c1a-aa8d-366aa83fbf51, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=8a4718af-d672-4453-91df-ba01f3157931) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:02:26 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:26.175 106624 INFO neutron.agent.ovn.metadata.agent [-] Port 8a4718af-d672-4453-91df-ba01f3157931 in datapath 5cc11a5f-7338-49fd-ba02-2db7ff676c4f unbound from our chassis#033[00m
Nov 28 18:02:26 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:26.176 106624 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cc11a5f-7338-49fd-ba02-2db7ff676c4f#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.192 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:26 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:26.195 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[9b518131-facc-409c-9d36-a319826b9b6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:02:26 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Nov 28 18:02:26 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 5.473s CPU time.
Nov 28 18:02:26 compute-0 systemd-machined[155703]: Machine qemu-3-instance-00000003 terminated.
Nov 28 18:02:26 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:26.231 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[62d074b7-fb1d-4568-a47b-fd5d2a88bb19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:02:26 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:26.234 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[cad89ca4-3570-4509-9e9a-21012235b4c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:02:26 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:26.280 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[f6d143d9-a0da-4723-8cd3-48cbab1ed36c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:02:26 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:26.298 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[6a89755e-8870-429e-a6ba-857b5282e264]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc11a5f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:38:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 370971, 'reachable_time': 41615, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241782, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:02:26 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:26.314 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[31362e7a-dd63-4c0c-bc5e-7209600669df]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap5cc11a5f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 370983, 'tstamp': 370983}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241783, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5cc11a5f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 370986, 'tstamp': 370986}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241783, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:02:26 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:26.316 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc11a5f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.318 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.323 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:26 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:26.324 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cc11a5f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:02:26 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:26.324 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:02:26 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:26.325 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cc11a5f-70, col_values=(('external_ids', {'iface-id': '467e3797-177d-4174-b963-0efbd15595b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:02:26 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:26.325 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.354 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.359 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.421 189300 INFO nova.virt.libvirt.driver [-] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Instance destroyed successfully.#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.423 189300 DEBUG nova.objects.instance [None req-7317d738-d4e0-4199-8e1f-adce6264e57d 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lazy-loading 'resources' on Instance uuid 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.437 189300 DEBUG nova.virt.libvirt.vif [None req-7317d738-d4e0-4199-8e1f-adce6264e57d 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-28T18:02:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-7knpyto-qenf7da4luz4-6vcrszb4rezp-vnf-363khc3uljnu',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7knpyto-qenf7da4luz4-6vcrszb4rezp-vnf-363khc3uljnu',id=3,image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-28T18:02:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='ac6a0a76-f006-4c50-a4a8-904a1f128161'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='79ee04b003ca4eb8a045699c7852a8b0',ramdisk_id='',reservation_id='r-dw8rnfar',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-28T18:02:21Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT05MDQwMjE0ODkyNjYxNjgzNjY2PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTkwNDAyMTQ4OTI2NjE2ODM2NjY9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09OTA0MDIxNDg5MjY2MTY4MzY2Nj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTkwNDAyMTQ4OTI2NjE2ODM2NjY9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT05MDQwMjE0ODkyNjYxNjgzNjY2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT05MDQwMjE0ODkyNjYxNjgzNjY2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Nov 28 18:02:26 compute-0 nova_compute[189296]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09OTA0M
DIxNDg5MjY2MTY4MzY2Nj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTkwNDAyMTQ4OTI2NjE2ODM2NjY9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT05MDQwMjE0ODkyNjYxNjgzNjY2PT0tLQo=',user_id='6a35450c34a344b1a4e63aae1be2b971',uuid=6b9c0462-2408-4f6c-ae23-4cff0d9ef19d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8a4718af-d672-4453-91df-ba01f3157931", "address": "fa:16:3e:e8:a3:93", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.228", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a4718af-d6", "ovs_interfaceid": "8a4718af-d672-4453-91df-ba01f3157931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.438 189300 DEBUG nova.network.os_vif_util [None req-7317d738-d4e0-4199-8e1f-adce6264e57d 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converting VIF {"id": "8a4718af-d672-4453-91df-ba01f3157931", "address": "fa:16:3e:e8:a3:93", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.228", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a4718af-d6", "ovs_interfaceid": "8a4718af-d672-4453-91df-ba01f3157931", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.439 189300 DEBUG nova.network.os_vif_util [None req-7317d738-d4e0-4199-8e1f-adce6264e57d 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a3:93,bridge_name='br-int',has_traffic_filtering=True,id=8a4718af-d672-4453-91df-ba01f3157931,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8a4718af-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.439 189300 DEBUG os_vif [None req-7317d738-d4e0-4199-8e1f-adce6264e57d 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a3:93,bridge_name='br-int',has_traffic_filtering=True,id=8a4718af-d672-4453-91df-ba01f3157931,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8a4718af-d6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.441 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.442 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a4718af-d6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.444 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.446 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.449 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.451 189300 INFO os_vif [None req-7317d738-d4e0-4199-8e1f-adce6264e57d 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:a3:93,bridge_name='br-int',has_traffic_filtering=True,id=8a4718af-d672-4453-91df-ba01f3157931,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8a4718af-d6')#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.452 189300 INFO nova.virt.libvirt.driver [None req-7317d738-d4e0-4199-8e1f-adce6264e57d 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Deleting instance files /var/lib/nova/instances/6b9c0462-2408-4f6c-ae23-4cff0d9ef19d_del#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.453 189300 INFO nova.virt.libvirt.driver [None req-7317d738-d4e0-4199-8e1f-adce6264e57d 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Deletion of /var/lib/nova/instances/6b9c0462-2408-4f6c-ae23-4cff0d9ef19d_del complete#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.540 189300 DEBUG nova.virt.libvirt.host [None req-7317d738-d4e0-4199-8e1f-adce6264e57d 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.541 189300 INFO nova.virt.libvirt.host [None req-7317d738-d4e0-4199-8e1f-adce6264e57d 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] UEFI support detected#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.543 189300 INFO nova.compute.manager [None req-7317d738-d4e0-4199-8e1f-adce6264e57d 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Took 0.41 seconds to destroy the instance on the hypervisor.#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.545 189300 DEBUG oslo.service.loopingcall [None req-7317d738-d4e0-4199-8e1f-adce6264e57d 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.546 189300 DEBUG nova.compute.manager [-] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 28 18:02:26 compute-0 nova_compute[189296]: 2025-11-28 18:02:26.546 189300 DEBUG nova.network.neutron [-] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 28 18:02:26 compute-0 rsyslogd[236416]: message too long (8192) with configured size 8096, begin of message is: 2025-11-28 18:02:26.437 189300 DEBUG nova.virt.libvirt.vif [None req-7317d738-d4 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 28 18:02:27 compute-0 nova_compute[189296]: 2025-11-28 18:02:27.882 189300 DEBUG nova.network.neutron [-] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:02:27 compute-0 nova_compute[189296]: 2025-11-28 18:02:27.919 189300 INFO nova.compute.manager [-] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Took 1.37 seconds to deallocate network for instance.#033[00m
Nov 28 18:02:27 compute-0 nova_compute[189296]: 2025-11-28 18:02:27.920 189300 DEBUG nova.compute.manager [req-7ff211ca-7ba5-44b6-84e7-c85f88ca1498 req-5caedc98-802b-487b-b408-cf5e53213058 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Received event network-vif-unplugged-8a4718af-d672-4453-91df-ba01f3157931 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:02:27 compute-0 nova_compute[189296]: 2025-11-28 18:02:27.921 189300 DEBUG oslo_concurrency.lockutils [req-7ff211ca-7ba5-44b6-84e7-c85f88ca1498 req-5caedc98-802b-487b-b408-cf5e53213058 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "6b9c0462-2408-4f6c-ae23-4cff0d9ef19d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:27 compute-0 nova_compute[189296]: 2025-11-28 18:02:27.921 189300 DEBUG oslo_concurrency.lockutils [req-7ff211ca-7ba5-44b6-84e7-c85f88ca1498 req-5caedc98-802b-487b-b408-cf5e53213058 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "6b9c0462-2408-4f6c-ae23-4cff0d9ef19d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:27 compute-0 nova_compute[189296]: 2025-11-28 18:02:27.922 189300 DEBUG oslo_concurrency.lockutils [req-7ff211ca-7ba5-44b6-84e7-c85f88ca1498 req-5caedc98-802b-487b-b408-cf5e53213058 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "6b9c0462-2408-4f6c-ae23-4cff0d9ef19d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:27 compute-0 nova_compute[189296]: 2025-11-28 18:02:27.922 189300 DEBUG nova.compute.manager [req-7ff211ca-7ba5-44b6-84e7-c85f88ca1498 req-5caedc98-802b-487b-b408-cf5e53213058 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] No waiting events found dispatching network-vif-unplugged-8a4718af-d672-4453-91df-ba01f3157931 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:02:27 compute-0 nova_compute[189296]: 2025-11-28 18:02:27.922 189300 DEBUG nova.compute.manager [req-7ff211ca-7ba5-44b6-84e7-c85f88ca1498 req-5caedc98-802b-487b-b408-cf5e53213058 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Received event network-vif-unplugged-8a4718af-d672-4453-91df-ba01f3157931 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 28 18:02:27 compute-0 nova_compute[189296]: 2025-11-28 18:02:27.923 189300 DEBUG nova.compute.manager [req-7ff211ca-7ba5-44b6-84e7-c85f88ca1498 req-5caedc98-802b-487b-b408-cf5e53213058 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Received event network-vif-plugged-8a4718af-d672-4453-91df-ba01f3157931 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:02:27 compute-0 nova_compute[189296]: 2025-11-28 18:02:27.923 189300 DEBUG oslo_concurrency.lockutils [req-7ff211ca-7ba5-44b6-84e7-c85f88ca1498 req-5caedc98-802b-487b-b408-cf5e53213058 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "6b9c0462-2408-4f6c-ae23-4cff0d9ef19d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:27 compute-0 nova_compute[189296]: 2025-11-28 18:02:27.924 189300 DEBUG oslo_concurrency.lockutils [req-7ff211ca-7ba5-44b6-84e7-c85f88ca1498 req-5caedc98-802b-487b-b408-cf5e53213058 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "6b9c0462-2408-4f6c-ae23-4cff0d9ef19d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:27 compute-0 nova_compute[189296]: 2025-11-28 18:02:27.924 189300 DEBUG oslo_concurrency.lockutils [req-7ff211ca-7ba5-44b6-84e7-c85f88ca1498 req-5caedc98-802b-487b-b408-cf5e53213058 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "6b9c0462-2408-4f6c-ae23-4cff0d9ef19d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:27 compute-0 nova_compute[189296]: 2025-11-28 18:02:27.925 189300 DEBUG nova.compute.manager [req-7ff211ca-7ba5-44b6-84e7-c85f88ca1498 req-5caedc98-802b-487b-b408-cf5e53213058 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] No waiting events found dispatching network-vif-plugged-8a4718af-d672-4453-91df-ba01f3157931 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:02:27 compute-0 nova_compute[189296]: 2025-11-28 18:02:27.925 189300 WARNING nova.compute.manager [req-7ff211ca-7ba5-44b6-84e7-c85f88ca1498 req-5caedc98-802b-487b-b408-cf5e53213058 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Received unexpected event network-vif-plugged-8a4718af-d672-4453-91df-ba01f3157931 for instance with vm_state active and task_state deleting.#033[00m
Nov 28 18:02:27 compute-0 nova_compute[189296]: 2025-11-28 18:02:27.925 189300 DEBUG nova.compute.manager [req-7ff211ca-7ba5-44b6-84e7-c85f88ca1498 req-5caedc98-802b-487b-b408-cf5e53213058 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Received event network-changed-8a4718af-d672-4453-91df-ba01f3157931 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:02:27 compute-0 nova_compute[189296]: 2025-11-28 18:02:27.926 189300 DEBUG nova.compute.manager [req-7ff211ca-7ba5-44b6-84e7-c85f88ca1498 req-5caedc98-802b-487b-b408-cf5e53213058 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Refreshing instance network info cache due to event network-changed-8a4718af-d672-4453-91df-ba01f3157931. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 28 18:02:27 compute-0 nova_compute[189296]: 2025-11-28 18:02:27.926 189300 DEBUG oslo_concurrency.lockutils [req-7ff211ca-7ba5-44b6-84e7-c85f88ca1498 req-5caedc98-802b-487b-b408-cf5e53213058 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-6b9c0462-2408-4f6c-ae23-4cff0d9ef19d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:02:27 compute-0 nova_compute[189296]: 2025-11-28 18:02:27.927 189300 DEBUG oslo_concurrency.lockutils [req-7ff211ca-7ba5-44b6-84e7-c85f88ca1498 req-5caedc98-802b-487b-b408-cf5e53213058 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-6b9c0462-2408-4f6c-ae23-4cff0d9ef19d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:02:27 compute-0 nova_compute[189296]: 2025-11-28 18:02:27.928 189300 DEBUG nova.network.neutron [req-7ff211ca-7ba5-44b6-84e7-c85f88ca1498 req-5caedc98-802b-487b-b408-cf5e53213058 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Refreshing network info cache for port 8a4718af-d672-4453-91df-ba01f3157931 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 18:02:27 compute-0 nova_compute[189296]: 2025-11-28 18:02:27.971 189300 DEBUG oslo_concurrency.lockutils [None req-7317d738-d4e0-4199-8e1f-adce6264e57d 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:02:27 compute-0 nova_compute[189296]: 2025-11-28 18:02:27.972 189300 DEBUG oslo_concurrency.lockutils [None req-7317d738-d4e0-4199-8e1f-adce6264e57d 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:02:28 compute-0 nova_compute[189296]: 2025-11-28 18:02:28.054 189300 DEBUG nova.network.neutron [req-7ff211ca-7ba5-44b6-84e7-c85f88ca1498 req-5caedc98-802b-487b-b408-cf5e53213058 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 28 18:02:28 compute-0 nova_compute[189296]: 2025-11-28 18:02:28.088 189300 DEBUG nova.compute.provider_tree [None req-7317d738-d4e0-4199-8e1f-adce6264e57d 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:02:28 compute-0 nova_compute[189296]: 2025-11-28 18:02:28.101 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:28 compute-0 nova_compute[189296]: 2025-11-28 18:02:28.107 189300 DEBUG nova.scheduler.client.report [None req-7317d738-d4e0-4199-8e1f-adce6264e57d 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:02:28 compute-0 nova_compute[189296]: 2025-11-28 18:02:28.572 189300 DEBUG oslo_concurrency.lockutils [None req-7317d738-d4e0-4199-8e1f-adce6264e57d 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:28 compute-0 nova_compute[189296]: 2025-11-28 18:02:28.612 189300 INFO nova.scheduler.client.report [None req-7317d738-d4e0-4199-8e1f-adce6264e57d 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Deleted allocations for instance 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d#033[00m
Nov 28 18:02:28 compute-0 nova_compute[189296]: 2025-11-28 18:02:28.679 189300 DEBUG oslo_concurrency.lockutils [None req-7317d738-d4e0-4199-8e1f-adce6264e57d 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "6b9c0462-2408-4f6c-ae23-4cff0d9ef19d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:02:28 compute-0 nova_compute[189296]: 2025-11-28 18:02:28.727 189300 DEBUG nova.network.neutron [req-7ff211ca-7ba5-44b6-84e7-c85f88ca1498 req-5caedc98-802b-487b-b408-cf5e53213058 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:02:28 compute-0 nova_compute[189296]: 2025-11-28 18:02:28.743 189300 DEBUG oslo_concurrency.lockutils [req-7ff211ca-7ba5-44b6-84e7-c85f88ca1498 req-5caedc98-802b-487b-b408-cf5e53213058 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-6b9c0462-2408-4f6c-ae23-4cff0d9ef19d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:02:29 compute-0 podman[203494]: time="2025-11-28T18:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:02:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 18:02:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4770 "" "Go-http-client/1.1"
Nov 28 18:02:31 compute-0 openstack_network_exporter[205632]: ERROR   18:02:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:02:31 compute-0 openstack_network_exporter[205632]: ERROR   18:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:02:31 compute-0 openstack_network_exporter[205632]: ERROR   18:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:02:31 compute-0 openstack_network_exporter[205632]: ERROR   18:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:02:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:02:31 compute-0 openstack_network_exporter[205632]: ERROR   18:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:02:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:02:31 compute-0 nova_compute[189296]: 2025-11-28 18:02:31.444 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:33 compute-0 podman[241805]: 2025-11-28 18:02:33.004147227 +0000 UTC m=+0.062321411 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 28 18:02:33 compute-0 nova_compute[189296]: 2025-11-28 18:02:33.105 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:36 compute-0 nova_compute[189296]: 2025-11-28 18:02:36.446 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:38 compute-0 nova_compute[189296]: 2025-11-28 18:02:38.107 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:41 compute-0 nova_compute[189296]: 2025-11-28 18:02:41.419 189300 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764352946.4184399, 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:02:41 compute-0 nova_compute[189296]: 2025-11-28 18:02:41.420 189300 INFO nova.compute.manager [-] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] VM Stopped (Lifecycle Event)#033[00m
Nov 28 18:02:41 compute-0 nova_compute[189296]: 2025-11-28 18:02:41.437 189300 DEBUG nova.compute.manager [None req-7af047be-5c88-449b-90c8-ca58cc09e57c - - - - - -] [instance: 6b9c0462-2408-4f6c-ae23-4cff0d9ef19d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:02:41 compute-0 nova_compute[189296]: 2025-11-28 18:02:41.448 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:43 compute-0 podman[241831]: 2025-11-28 18:02:43.014552046 +0000 UTC m=+0.074999089 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=f26160204c78771e78cdd2489258319b, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 28 18:02:43 compute-0 podman[241830]: 2025-11-28 18:02:43.030517806 +0000 UTC m=+0.091587415 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., container_name=openstack_network_exporter, architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, distribution-scope=public)
Nov 28 18:02:43 compute-0 podman[241832]: 2025-11-28 18:02:43.039667549 +0000 UTC m=+0.091757128 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:02:43 compute-0 nova_compute[189296]: 2025-11-28 18:02:43.109 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:46 compute-0 nova_compute[189296]: 2025-11-28 18:02:46.450 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:48 compute-0 podman[241888]: 2025-11-28 18:02:48.006863722 +0000 UTC m=+0.066779320 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:02:48 compute-0 podman[241887]: 2025-11-28 18:02:48.027979477 +0000 UTC m=+0.090423817 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 28 18:02:48 compute-0 nova_compute[189296]: 2025-11-28 18:02:48.111 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:50 compute-0 podman[241924]: 2025-11-28 18:02:50.015748021 +0000 UTC m=+0.074563919 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:02:50 compute-0 podman[241925]: 2025-11-28 18:02:50.022549197 +0000 UTC m=+0.074567949 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., name=ubi9, version=9.4, io.openshift.expose-services=, managed_by=edpm_ansible, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:02:51 compute-0 nova_compute[189296]: 2025-11-28 18:02:51.452 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.978 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.978 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.978 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.979 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc143395760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.983 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.983 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.984 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 28 18:02:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:51.985 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}1b19fef84fe76c5f8eb41f423a94cfc31b2af00fb7940935967c184dd40fa55a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 28 18:02:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:52.607 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:02:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:52.608 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:02:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:02:52.609 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.620 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Fri, 28 Nov 2025 18:02:52 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-eeae0430-14d9-4a85-be23-d4cf3974d7b7 x-openstack-request-id: req-eeae0430-14d9-4a85-be23-d4cf3974d7b7 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.621 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf", "name": "vn-7knpyto-myqv6vc5iwu6-3wmt66b4jk5x-vnf-uuehi3czwwyv", "status": "ACTIVE", "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "user_id": "6a35450c34a344b1a4e63aae1be2b971", "metadata": {"metering.server_group": "ac6a0a76-f006-4c50-a4a8-904a1f128161"}, "hostId": "db9a2769e8f144ae30ff05291a20072f031ca2fe14565f94b8d8a651", "image": {"id": "f54c2688-82d2-4cd3-8c3b-96e774162948", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/f54c2688-82d2-4cd3-8c3b-96e774162948"}]}, "flavor": {"id": "e125fa74-9e9f-47dc-8c8e-699980f99f10", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/e125fa74-9e9f-47dc-8c8e-699980f99f10"}]}, "created": "2025-11-28T18:02:14Z", "updated": "2025-11-28T18:02:23Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.178", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:7e:01:76"}, {"version": 4, "addr": "192.168.122.206", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:7e:01:76"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-28T18:02:23.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, 
"OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.621 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf used request id req-eeae0430-14d9-4a85-be23-d4cf3974d7b7 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.622 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf', 'name': 'vn-7knpyto-myqv6vc5iwu6-3wmt66b4jk5x-vnf-uuehi3czwwyv', 'flavor': {'id': 'e125fa74-9e9f-47dc-8c8e-699980f99f10', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'f54c2688-82d2-4cd3-8c3b-96e774162948'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '79ee04b003ca4eb8a045699c7852a8b0', 'user_id': '6a35450c34a344b1a4e63aae1be2b971', 'hostId': 'db9a2769e8f144ae30ff05291a20072f031ca2fe14565f94b8d8a651', 'status': 'active', 'metadata': {'metering.server_group': 'ac6a0a76-f006-4c50-a4a8-904a1f128161'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:02:52 compute-0 nova_compute[189296]: 2025-11-28 18:02:52.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 18:02:52 compute-0 nova_compute[189296]: 2025-11-28 18:02:52.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.625 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5d10f9fc-89ea-4059-8532-7e0aec0791d6', 'name': 'test_0', 'flavor': {'id': 'e125fa74-9e9f-47dc-8c8e-699980f99f10', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'f54c2688-82d2-4cd3-8c3b-96e774162948'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '79ee04b003ca4eb8a045699c7852a8b0', 'user_id': '6a35450c34a344b1a4e63aae1be2b971', 'hostId': 'db9a2769e8f144ae30ff05291a20072f031ca2fe14565f94b8d8a651', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.628 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3e7aebb1-2fd3-449c-be21-02c4d1b57717', 'name': 'vn-7knpyto-6e6fe7uhqqsg-35p6vulzyxtr-vnf-mf7ve6yw5m3s', 'flavor': {'id': 'e125fa74-9e9f-47dc-8c8e-699980f99f10', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'f54c2688-82d2-4cd3-8c3b-96e774162948'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '79ee04b003ca4eb8a045699c7852a8b0', 'user_id': '6a35450c34a344b1a4e63aae1be2b971', 'hostId': 'db9a2769e8f144ae30ff05291a20072f031ca2fe14565f94b8d8a651', 'status': 'active', 'metadata': {'metering.server_group': 'ac6a0a76-f006-4c50-a4a8-904a1f128161'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.628 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.629 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.629 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.629 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-28T18:02:52.629257) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.629 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.656 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.656 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.657 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.682 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.683 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.683 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.704 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.705 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.705 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.706 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.706 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc1433970b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.706 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.707 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.707 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.707 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.707 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-28T18:02:52.707567) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.784 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.784 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.785 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.847 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.848 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.848 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.908 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.909 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.909 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.910 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.910 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc1433971d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.910 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.911 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.911 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.911 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.914 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.latency volume: 211400486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.914 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.914 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.latency volume: 2059609 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.915 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.latency volume: 284678818 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.915 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.latency volume: 69824352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.916 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.latency volume: 37055244 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.916 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.latency volume: 321385299 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.917 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.latency volume: 64866438 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.915 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-28T18:02:52.911705) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.917 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.latency volume: 53024748 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.918 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.918 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc143397c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.918 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.918 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.918 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.919 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.919 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-28T18:02:52.919185) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.924 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf / tap7b3b067b-5d inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.924 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.934 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.938 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.939 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.939 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc143397620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.939 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.939 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.939 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.939 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.940 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-28T18:02:52.939729) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.963 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.964 15 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf: ceilometer.compute.pollsters.NoVolumeException
Nov 28 18:02:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:52.983 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.002 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/memory.usage volume: 49.16015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.002 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.003 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc143397260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.003 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.003 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.003 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.003 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.003 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.003 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-28T18:02:53.003471) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.003 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.004 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.004 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.004 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.004 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.005 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.005 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.005 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.006 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.006 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc143397290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.006 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.006 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.006 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.006 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.006 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.007 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.007 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-28T18:02:53.006596) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.007 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.007 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.007 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.008 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.008 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.bytes volume: 41848832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.008 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.008 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.009 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.009 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc143408290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.009 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.009 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.009 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.009 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.010 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-28T18:02:53.009895) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.011 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.011 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.011 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.011 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc1433972f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.011 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.012 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.012 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.012 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.012 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.012 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.012 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.013 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.latency volume: 646402207 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.013 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.latency volume: 6041958 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.013 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.013 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.latency volume: 994862100 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-28T18:02:53.012256) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.014 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.latency volume: 9215217 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.014 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.014 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.015 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc144640f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.015 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.015 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.015 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.015 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.015 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.015 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-28T18:02:53.015358) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.015 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.015 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.016 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.016 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.016 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.016 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.requests volume: 241 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.016 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.017 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.018 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.018 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc1433976b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.018 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.018 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.018 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.018 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.018 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-28T18:02:53.018707) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.018 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.019 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.bytes.delta volume: 252 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.019 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.incoming.bytes.delta volume: 3599 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.019 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.019 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc143397fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.019 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.019 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.020 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.020 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.020 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.020 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.020 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-28T18:02:53.020081) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.020 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.021 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.021 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc14457db80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.021 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.021 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.021 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.021 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.021 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/cpu volume: 28530000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.022 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/cpu volume: 35760000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.022 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/cpu volume: 259520000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.022 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.022 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc143397950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-28T18:02:53.021815) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.023 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.023 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.023 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.023 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.023 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-28T18:02:53.023289) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.023 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.023 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-7knpyto-myqv6vc5iwu6-3wmt66b4jk5x-vnf-uuehi3czwwyv>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-7knpyto-myqv6vc5iwu6-3wmt66b4jk5x-vnf-uuehi3czwwyv>]
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.023 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc143397380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.023 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.023 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.024 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.024 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.024 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.024 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc143397bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.024 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.024 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.025 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.025 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.025 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.025 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-28T18:02:53.024155) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.025 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-28T18:02:53.025152) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.025 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.025 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.incoming.packets volume: 55 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.026 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.026 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc1433973e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.026 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.026 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.026 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.026 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.026 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.027 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc143397c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.027 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.027 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.027 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.027 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.027 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.027 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-28T18:02:53.026503) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.027 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-28T18:02:53.027453) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.027 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.027 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.028 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.028 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc143397ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.028 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.028 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.028 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.028 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.028 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.029 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.029 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.outgoing.bytes volume: 7592 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.029 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.030 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc1460ad370>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.030 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.030 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.030 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.030 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.030 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.030 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.031 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.031 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.allocation volume: 21962752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.031 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.031 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.031 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.031 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-28T18:02:53.028862) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.032 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-28T18:02:53.030387) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.032 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.032 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.032 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.032 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc143397d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.032 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.033 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.033 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.033 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.033 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.033 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-28T18:02:53.033195) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.033 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.033 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.outgoing.bytes.delta volume: 2742 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.034 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.034 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc143397e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.034 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.034 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.034 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.034 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.034 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.034 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-7knpyto-myqv6vc5iwu6-3wmt66b4jk5x-vnf-uuehi3czwwyv>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-7knpyto-myqv6vc5iwu6-3wmt66b4jk5x-vnf-uuehi3czwwyv>]
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.035 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc143397650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.035 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.035 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-28T18:02:53.034677) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.035 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.035 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.035 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.035 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.036 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-28T18:02:53.035832) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.036 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.bytes volume: 2220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.036 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.incoming.bytes volume: 8406 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.036 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.036 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc143397e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.037 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.037 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.037 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.037 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.037 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.037 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.038 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.outgoing.packets volume: 66 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.038 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-28T18:02:53.037345) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.038 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.038 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc143397f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.039 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.039 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.039 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.039 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.039 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.039 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-28T18:02:53.039426) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.040 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.040 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.040 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.040 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc143397230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.040 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.041 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.041 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.041 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.041 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.041 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.042 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.042 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-28T18:02:53.041221) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.042 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.042 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.042 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.042 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.043 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.043 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.043 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.044 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.044 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.044 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.044 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.044 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.044 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.044 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.044 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.045 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.045 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.045 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.045 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.045 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.045 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.045 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.045 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.045 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.045 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.045 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.045 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.046 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.046 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.046 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.046 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.046 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:02:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:02:53.046 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:02:53 compute-0 podman[241964]: 2025-11-28 18:02:53.045872306 +0000 UTC m=+0.112615368 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 28 18:02:53 compute-0 nova_compute[189296]: 2025-11-28 18:02:53.113 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:56 compute-0 nova_compute[189296]: 2025-11-28 18:02:56.455 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:56 compute-0 ovn_controller[97771]: 2025-11-28T18:02:56Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7e:01:76 192.168.0.178
Nov 28 18:02:56 compute-0 ovn_controller[97771]: 2025-11-28T18:02:56Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7e:01:76 192.168.0.178
Nov 28 18:02:56 compute-0 nova_compute[189296]: 2025-11-28 18:02:56.642 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:02:56 compute-0 nova_compute[189296]: 2025-11-28 18:02:56.643 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 28 18:02:56 compute-0 nova_compute[189296]: 2025-11-28 18:02:56.666 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 28 18:02:58 compute-0 nova_compute[189296]: 2025-11-28 18:02:58.113 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:02:58 compute-0 nova_compute[189296]: 2025-11-28 18:02:58.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:02:58 compute-0 nova_compute[189296]: 2025-11-28 18:02:58.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:02:58 compute-0 nova_compute[189296]: 2025-11-28 18:02:58.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:02:59 compute-0 podman[203494]: time="2025-11-28T18:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:02:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 18:02:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4762 "" "Go-http-client/1.1"
Nov 28 18:03:00 compute-0 nova_compute[189296]: 2025-11-28 18:03:00.634 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:03:01 compute-0 openstack_network_exporter[205632]: ERROR   18:03:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:03:01 compute-0 openstack_network_exporter[205632]: ERROR   18:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:03:01 compute-0 openstack_network_exporter[205632]: ERROR   18:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:03:01 compute-0 openstack_network_exporter[205632]: ERROR   18:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:03:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:03:01 compute-0 openstack_network_exporter[205632]: ERROR   18:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:03:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:03:01 compute-0 nova_compute[189296]: 2025-11-28 18:03:01.458 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:03:01 compute-0 ovn_controller[97771]: 2025-11-28T18:03:01Z|00051|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Nov 28 18:03:01 compute-0 nova_compute[189296]: 2025-11-28 18:03:01.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:03:01 compute-0 nova_compute[189296]: 2025-11-28 18:03:01.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:03:02 compute-0 nova_compute[189296]: 2025-11-28 18:03:02.469 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-3e7aebb1-2fd3-449c-be21-02c4d1b57717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:03:02 compute-0 nova_compute[189296]: 2025-11-28 18:03:02.469 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-3e7aebb1-2fd3-449c-be21-02c4d1b57717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:03:02 compute-0 nova_compute[189296]: 2025-11-28 18:03:02.469 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:03:03 compute-0 nova_compute[189296]: 2025-11-28 18:03:03.117 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:03:03 compute-0 nova_compute[189296]: 2025-11-28 18:03:03.705 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Updating instance_info_cache with network_info: [{"id": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "address": "fa:16:3e:4f:bc:ca", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0754721-6c", "ovs_interfaceid": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:03:03 compute-0 nova_compute[189296]: 2025-11-28 18:03:03.721 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-3e7aebb1-2fd3-449c-be21-02c4d1b57717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:03:03 compute-0 nova_compute[189296]: 2025-11-28 18:03:03.721 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:03:03 compute-0 nova_compute[189296]: 2025-11-28 18:03:03.722 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:03:03 compute-0 nova_compute[189296]: 2025-11-28 18:03:03.723 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:03:04 compute-0 podman[242005]: 2025-11-28 18:03:04.017299523 +0000 UTC m=+0.065416366 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 28 18:03:04 compute-0 nova_compute[189296]: 2025-11-28 18:03:04.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:03:05 compute-0 nova_compute[189296]: 2025-11-28 18:03:05.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:03:05 compute-0 nova_compute[189296]: 2025-11-28 18:03:05.659 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:03:05 compute-0 nova_compute[189296]: 2025-11-28 18:03:05.659 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:03:05 compute-0 nova_compute[189296]: 2025-11-28 18:03:05.659 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:03:05 compute-0 nova_compute[189296]: 2025-11-28 18:03:05.659 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:03:05 compute-0 nova_compute[189296]: 2025-11-28 18:03:05.757 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:03:05 compute-0 nova_compute[189296]: 2025-11-28 18:03:05.818 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:03:05 compute-0 nova_compute[189296]: 2025-11-28 18:03:05.820 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:03:05 compute-0 nova_compute[189296]: 2025-11-28 18:03:05.878 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:03:05 compute-0 nova_compute[189296]: 2025-11-28 18:03:05.879 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:03:05 compute-0 nova_compute[189296]: 2025-11-28 18:03:05.941 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:03:05 compute-0 nova_compute[189296]: 2025-11-28 18:03:05.942 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:03:06 compute-0 nova_compute[189296]: 2025-11-28 18:03:06.000 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:03:06 compute-0 nova_compute[189296]: 2025-11-28 18:03:06.007 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:03:06 compute-0 nova_compute[189296]: 2025-11-28 18:03:06.064 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:03:06 compute-0 nova_compute[189296]: 2025-11-28 18:03:06.065 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:03:06 compute-0 nova_compute[189296]: 2025-11-28 18:03:06.123 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:03:06 compute-0 nova_compute[189296]: 2025-11-28 18:03:06.124 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:03:06 compute-0 nova_compute[189296]: 2025-11-28 18:03:06.182 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:03:06 compute-0 nova_compute[189296]: 2025-11-28 18:03:06.184 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:03:06 compute-0 nova_compute[189296]: 2025-11-28 18:03:06.244 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:03:06 compute-0 nova_compute[189296]: 2025-11-28 18:03:06.250 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:03:06 compute-0 nova_compute[189296]: 2025-11-28 18:03:06.306 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:03:06 compute-0 nova_compute[189296]: 2025-11-28 18:03:06.307 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:03:06 compute-0 nova_compute[189296]: 2025-11-28 18:03:06.372 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:03:06 compute-0 nova_compute[189296]: 2025-11-28 18:03:06.373 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:03:06 compute-0 nova_compute[189296]: 2025-11-28 18:03:06.432 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:03:06 compute-0 nova_compute[189296]: 2025-11-28 18:03:06.433 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:03:06 compute-0 nova_compute[189296]: 2025-11-28 18:03:06.460 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:03:06 compute-0 nova_compute[189296]: 2025-11-28 18:03:06.491 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:03:06 compute-0 nova_compute[189296]: 2025-11-28 18:03:06.879 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:03:06 compute-0 nova_compute[189296]: 2025-11-28 18:03:06.881 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4810MB free_disk=72.34091567993164GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:03:06 compute-0 nova_compute[189296]: 2025-11-28 18:03:06.882 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:03:06 compute-0 nova_compute[189296]: 2025-11-28 18:03:06.882 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:03:07 compute-0 nova_compute[189296]: 2025-11-28 18:03:07.020 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 5d10f9fc-89ea-4059-8532-7e0aec0791d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:03:07 compute-0 nova_compute[189296]: 2025-11-28 18:03:07.021 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 3e7aebb1-2fd3-449c-be21-02c4d1b57717 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:03:07 compute-0 nova_compute[189296]: 2025-11-28 18:03:07.021 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:03:07 compute-0 nova_compute[189296]: 2025-11-28 18:03:07.021 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:03:07 compute-0 nova_compute[189296]: 2025-11-28 18:03:07.021 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:03:07 compute-0 nova_compute[189296]: 2025-11-28 18:03:07.183 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:03:07 compute-0 nova_compute[189296]: 2025-11-28 18:03:07.198 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:03:07 compute-0 nova_compute[189296]: 2025-11-28 18:03:07.215 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:03:07 compute-0 nova_compute[189296]: 2025-11-28 18:03:07.215 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.333s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:03:08 compute-0 nova_compute[189296]: 2025-11-28 18:03:08.120 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:03:08 compute-0 nova_compute[189296]: 2025-11-28 18:03:08.215 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:03:08 compute-0 nova_compute[189296]: 2025-11-28 18:03:08.216 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:03:11 compute-0 nova_compute[189296]: 2025-11-28 18:03:11.462 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:03:13 compute-0 nova_compute[189296]: 2025-11-28 18:03:13.122 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:03:14 compute-0 podman[242067]: 2025-11-28 18:03:14.007584776 +0000 UTC m=+0.068365219 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vcs-type=git, release=1755695350, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal)
Nov 28 18:03:14 compute-0 podman[242069]: 2025-11-28 18:03:14.009736008 +0000 UTC m=+0.063032478 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 28 18:03:14 compute-0 podman[242068]: 2025-11-28 18:03:14.030551915 +0000 UTC m=+0.088099019 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:03:16 compute-0 nova_compute[189296]: 2025-11-28 18:03:16.464 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:03:18 compute-0 nova_compute[189296]: 2025-11-28 18:03:18.124 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:03:19 compute-0 podman[242126]: 2025-11-28 18:03:19.016273999 +0000 UTC m=+0.076630690 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 28 18:03:19 compute-0 podman[242125]: 2025-11-28 18:03:19.032497886 +0000 UTC m=+0.097841288 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 28 18:03:21 compute-0 podman[242162]: 2025-11-28 18:03:21.004665679 +0000 UTC m=+0.065333444 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 28 18:03:21 compute-0 podman[242163]: 2025-11-28 18:03:21.021689933 +0000 UTC m=+0.076328371 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, version=9.4, architecture=x86_64, release-0.7.12=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0)
Nov 28 18:03:21 compute-0 nova_compute[189296]: 2025-11-28 18:03:21.466 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:03:23 compute-0 nova_compute[189296]: 2025-11-28 18:03:23.128 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:03:24 compute-0 podman[242205]: 2025-11-28 18:03:24.051598851 +0000 UTC m=+0.108382404 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 28 18:03:26 compute-0 nova_compute[189296]: 2025-11-28 18:03:26.468 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:03:28 compute-0 nova_compute[189296]: 2025-11-28 18:03:28.130 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:03:29 compute-0 podman[203494]: time="2025-11-28T18:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:03:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 18:03:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4767 "" "Go-http-client/1.1"
Nov 28 18:03:31 compute-0 openstack_network_exporter[205632]: ERROR   18:03:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:03:31 compute-0 openstack_network_exporter[205632]: ERROR   18:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:03:31 compute-0 openstack_network_exporter[205632]: ERROR   18:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:03:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:03:31 compute-0 openstack_network_exporter[205632]: ERROR   18:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:03:31 compute-0 openstack_network_exporter[205632]: ERROR   18:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:03:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:03:31 compute-0 nova_compute[189296]: 2025-11-28 18:03:31.470 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:03:33 compute-0 nova_compute[189296]: 2025-11-28 18:03:33.132 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:03:35 compute-0 podman[242229]: 2025-11-28 18:03:35.000393733 +0000 UTC m=+0.061947851 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:03:36 compute-0 nova_compute[189296]: 2025-11-28 18:03:36.472 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:03:38 compute-0 nova_compute[189296]: 2025-11-28 18:03:38.136 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:03:41 compute-0 nova_compute[189296]: 2025-11-28 18:03:41.474 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:03:43 compute-0 nova_compute[189296]: 2025-11-28 18:03:43.137 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:03:44 compute-0 podman[242255]: 2025-11-28 18:03:44.749442128 +0000 UTC m=+0.065866537 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 28 18:03:44 compute-0 podman[242253]: 2025-11-28 18:03:44.75198268 +0000 UTC m=+0.077808209 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64)
Nov 28 18:03:44 compute-0 podman[242254]: 2025-11-28 18:03:44.777572354 +0000 UTC m=+0.098636876 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b)
Nov 28 18:03:46 compute-0 nova_compute[189296]: 2025-11-28 18:03:46.476 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:03:48 compute-0 nova_compute[189296]: 2025-11-28 18:03:48.139 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:03:50 compute-0 podman[242311]: 2025-11-28 18:03:50.03850491 +0000 UTC m=+0.093598934 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 28 18:03:50 compute-0 podman[242312]: 2025-11-28 18:03:50.052276965 +0000 UTC m=+0.106215280 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 28 18:03:51 compute-0 nova_compute[189296]: 2025-11-28 18:03:51.479 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:03:52 compute-0 podman[242351]: 2025-11-28 18:03:52.001067301 +0000 UTC m=+0.063036488 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 18:03:52 compute-0 podman[242352]: 2025-11-28 18:03:52.038276489 +0000 UTC m=+0.094299741 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release-0.7.12=, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, managed_by=edpm_ansible, config_id=edpm, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4)
Nov 28 18:03:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:03:52.608 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:03:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:03:52.609 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:03:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:03:52.609 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:03:53 compute-0 nova_compute[189296]: 2025-11-28 18:03:53.140 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:03:55 compute-0 podman[242397]: 2025-11-28 18:03:55.048777525 +0000 UTC m=+0.108420185 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 28 18:03:56 compute-0 nova_compute[189296]: 2025-11-28 18:03:56.480 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:03:58 compute-0 nova_compute[189296]: 2025-11-28 18:03:58.142 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:03:59 compute-0 podman[203494]: time="2025-11-28T18:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:03:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 18:03:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4766 "" "Go-http-client/1.1"
Nov 28 18:04:00 compute-0 nova_compute[189296]: 2025-11-28 18:04:00.622 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:04:00 compute-0 nova_compute[189296]: 2025-11-28 18:04:00.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:04:00 compute-0 nova_compute[189296]: 2025-11-28 18:04:00.624 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:04:01 compute-0 openstack_network_exporter[205632]: ERROR   18:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:04:01 compute-0 openstack_network_exporter[205632]: ERROR   18:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:04:01 compute-0 openstack_network_exporter[205632]: ERROR   18:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:04:01 compute-0 openstack_network_exporter[205632]: ERROR   18:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:04:01 compute-0 openstack_network_exporter[205632]: ERROR   18:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:04:01 compute-0 nova_compute[189296]: 2025-11-28 18:04:01.482 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:03 compute-0 nova_compute[189296]: 2025-11-28 18:04:03.143 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:03 compute-0 nova_compute[189296]: 2025-11-28 18:04:03.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:04:03 compute-0 nova_compute[189296]: 2025-11-28 18:04:03.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:04:03 compute-0 nova_compute[189296]: 2025-11-28 18:04:03.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 18:04:03 compute-0 nova_compute[189296]: 2025-11-28 18:04:03.843 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:04:03 compute-0 nova_compute[189296]: 2025-11-28 18:04:03.844 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:04:03 compute-0 nova_compute[189296]: 2025-11-28 18:04:03.844 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:04:03 compute-0 nova_compute[189296]: 2025-11-28 18:04:03.844 189300 DEBUG nova.objects.instance [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5d10f9fc-89ea-4059-8532-7e0aec0791d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:04:05 compute-0 nova_compute[189296]: 2025-11-28 18:04:05.322 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Updating instance_info_cache with network_info: [{"id": "0e0a227a-6212-4496-8954-fe210b763d0b", "address": "fa:16:3e:28:42:00", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e0a227a-62", "ovs_interfaceid": "0e0a227a-6212-4496-8954-fe210b763d0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:04:05 compute-0 nova_compute[189296]: 2025-11-28 18:04:05.341 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:04:05 compute-0 nova_compute[189296]: 2025-11-28 18:04:05.341 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:04:05 compute-0 nova_compute[189296]: 2025-11-28 18:04:05.342 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:04:05 compute-0 nova_compute[189296]: 2025-11-28 18:04:05.342 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:04:05 compute-0 nova_compute[189296]: 2025-11-28 18:04:05.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:04:05 compute-0 nova_compute[189296]: 2025-11-28 18:04:05.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:04:05 compute-0 nova_compute[189296]: 2025-11-28 18:04:05.649 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:04:05 compute-0 nova_compute[189296]: 2025-11-28 18:04:05.650 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:04:05 compute-0 nova_compute[189296]: 2025-11-28 18:04:05.650 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:04:05 compute-0 nova_compute[189296]: 2025-11-28 18:04:05.651 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:04:05 compute-0 nova_compute[189296]: 2025-11-28 18:04:05.746 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:04:05 compute-0 nova_compute[189296]: 2025-11-28 18:04:05.808 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:04:05 compute-0 nova_compute[189296]: 2025-11-28 18:04:05.809 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:04:05 compute-0 nova_compute[189296]: 2025-11-28 18:04:05.872 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:04:05 compute-0 nova_compute[189296]: 2025-11-28 18:04:05.873 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:04:05 compute-0 nova_compute[189296]: 2025-11-28 18:04:05.938 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:04:05 compute-0 nova_compute[189296]: 2025-11-28 18:04:05.939 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:04:05 compute-0 podman[242430]: 2025-11-28 18:04:05.997663313 +0000 UTC m=+0.060254940 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:05.999 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:06.007 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:06.066 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:06.067 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:06.126 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:06.127 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:06.187 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:06.188 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:06.243 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:06.249 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:06.305 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:06.306 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:06.364 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:06.365 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:06.423 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:06.424 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:06.480 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:06.484 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:06.810 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:06.811 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4789MB free_disk=72.34091567993164GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:06.812 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:06.812 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:06.913 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 5d10f9fc-89ea-4059-8532-7e0aec0791d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:06.913 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 3e7aebb1-2fd3-449c-be21-02c4d1b57717 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:06.914 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:06.914 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:04:06 compute-0 nova_compute[189296]: 2025-11-28 18:04:06.914 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:04:07 compute-0 nova_compute[189296]: 2025-11-28 18:04:07.012 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:04:07 compute-0 nova_compute[189296]: 2025-11-28 18:04:07.035 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:04:07 compute-0 nova_compute[189296]: 2025-11-28 18:04:07.037 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:04:07 compute-0 nova_compute[189296]: 2025-11-28 18:04:07.038 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:04:08 compute-0 nova_compute[189296]: 2025-11-28 18:04:08.037 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:04:08 compute-0 nova_compute[189296]: 2025-11-28 18:04:08.146 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:08 compute-0 nova_compute[189296]: 2025-11-28 18:04:08.620 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:04:08 compute-0 nova_compute[189296]: 2025-11-28 18:04:08.653 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:04:08 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:04:08.668 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:8b:d3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:a2:f8:d3:3f:9a'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:04:08 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:04:08.669 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 28 18:04:08 compute-0 nova_compute[189296]: 2025-11-28 18:04:08.670 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:11 compute-0 nova_compute[189296]: 2025-11-28 18:04:11.487 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:11 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:04:11.670 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d60b742f-7e94-4137-b50a-cfc8eac54167, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:04:13 compute-0 nova_compute[189296]: 2025-11-28 18:04:13.149 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:14 compute-0 nova_compute[189296]: 2025-11-28 18:04:14.889 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "738e5649-3e79-434b-9fbe-4aff6d71b051" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:04:14 compute-0 nova_compute[189296]: 2025-11-28 18:04:14.890 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "738e5649-3e79-434b-9fbe-4aff6d71b051" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:04:14 compute-0 nova_compute[189296]: 2025-11-28 18:04:14.917 189300 DEBUG nova.compute.manager [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 28 18:04:15 compute-0 podman[242484]: 2025-11-28 18:04:15.002015727 +0000 UTC m=+0.062549977 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, version=9.6, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1755695350, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 28 18:04:15 compute-0 podman[242485]: 2025-11-28 18:04:15.003854391 +0000 UTC m=+0.060467825 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 28 18:04:15 compute-0 podman[242486]: 2025-11-28 18:04:15.016065669 +0000 UTC m=+0.067974909 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.145 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.146 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.153 189300 DEBUG nova.virt.hardware [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.154 189300 INFO nova.compute.claims [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.311 189300 DEBUG nova.compute.provider_tree [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.335 189300 DEBUG nova.scheduler.client.report [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.352 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.353 189300 DEBUG nova.compute.manager [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.388 189300 DEBUG nova.compute.manager [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.389 189300 DEBUG nova.network.neutron [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.412 189300 INFO nova.virt.libvirt.driver [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.452 189300 DEBUG nova.compute.manager [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.545 189300 DEBUG nova.compute.manager [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.546 189300 DEBUG nova.virt.libvirt.driver [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.547 189300 INFO nova.virt.libvirt.driver [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Creating image(s)#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.547 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "/var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.548 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "/var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.549 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "/var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.563 189300 DEBUG oslo_concurrency.processutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.619 189300 DEBUG oslo_concurrency.processutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.620 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "f8e1ccb00af4752d8a5c7b44d7152dd9458fb598" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.620 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "f8e1ccb00af4752d8a5c7b44d7152dd9458fb598" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.630 189300 DEBUG oslo_concurrency.processutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.688 189300 DEBUG oslo_concurrency.processutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.689 189300 DEBUG oslo_concurrency.processutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598,backing_fmt=raw /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.728 189300 DEBUG oslo_concurrency.processutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598,backing_fmt=raw /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk 1073741824" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.729 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "f8e1ccb00af4752d8a5c7b44d7152dd9458fb598" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.729 189300 DEBUG oslo_concurrency.processutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.784 189300 DEBUG oslo_concurrency.processutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.786 189300 DEBUG nova.virt.disk.api [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Checking if we can resize image /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.786 189300 DEBUG oslo_concurrency.processutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.848 189300 DEBUG oslo_concurrency.processutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.849 189300 DEBUG nova.virt.disk.api [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Cannot resize image /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.850 189300 DEBUG nova.objects.instance [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lazy-loading 'migration_context' on Instance uuid 738e5649-3e79-434b-9fbe-4aff6d71b051 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.863 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "/var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.864 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "/var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.864 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "/var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.877 189300 DEBUG oslo_concurrency.processutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.933 189300 DEBUG oslo_concurrency.processutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.934 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.935 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:04:15 compute-0 nova_compute[189296]: 2025-11-28 18:04:15.947 189300 DEBUG oslo_concurrency.processutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:04:16 compute-0 nova_compute[189296]: 2025-11-28 18:04:16.004 189300 DEBUG oslo_concurrency.processutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:04:16 compute-0 nova_compute[189296]: 2025-11-28 18:04:16.005 189300 DEBUG oslo_concurrency.processutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:04:16 compute-0 nova_compute[189296]: 2025-11-28 18:04:16.040 189300 DEBUG oslo_concurrency.processutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 1073741824" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:04:16 compute-0 nova_compute[189296]: 2025-11-28 18:04:16.041 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:04:16 compute-0 nova_compute[189296]: 2025-11-28 18:04:16.042 189300 DEBUG oslo_concurrency.processutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:04:16 compute-0 nova_compute[189296]: 2025-11-28 18:04:16.102 189300 DEBUG oslo_concurrency.processutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:04:16 compute-0 nova_compute[189296]: 2025-11-28 18:04:16.104 189300 DEBUG nova.virt.libvirt.driver [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 28 18:04:16 compute-0 nova_compute[189296]: 2025-11-28 18:04:16.104 189300 DEBUG nova.virt.libvirt.driver [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Ensure instance console log exists: /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 28 18:04:16 compute-0 nova_compute[189296]: 2025-11-28 18:04:16.105 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:04:16 compute-0 nova_compute[189296]: 2025-11-28 18:04:16.106 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:04:16 compute-0 nova_compute[189296]: 2025-11-28 18:04:16.106 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:04:16 compute-0 nova_compute[189296]: 2025-11-28 18:04:16.489 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:16 compute-0 nova_compute[189296]: 2025-11-28 18:04:16.845 189300 DEBUG nova.network.neutron [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Successfully updated port: d9985197-6aa0-4811-a620-ee1b4aa74e74 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 28 18:04:16 compute-0 nova_compute[189296]: 2025-11-28 18:04:16.882 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "refresh_cache-738e5649-3e79-434b-9fbe-4aff6d71b051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:04:16 compute-0 nova_compute[189296]: 2025-11-28 18:04:16.882 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquired lock "refresh_cache-738e5649-3e79-434b-9fbe-4aff6d71b051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:04:16 compute-0 nova_compute[189296]: 2025-11-28 18:04:16.883 189300 DEBUG nova.network.neutron [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 28 18:04:16 compute-0 nova_compute[189296]: 2025-11-28 18:04:16.974 189300 DEBUG nova.compute.manager [req-0e5f40af-3b63-4f87-bff6-9afa11bf8edb req-de8d9338-5f5f-4049-b582-2366542e6c8c 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Received event network-changed-d9985197-6aa0-4811-a620-ee1b4aa74e74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:04:16 compute-0 nova_compute[189296]: 2025-11-28 18:04:16.975 189300 DEBUG nova.compute.manager [req-0e5f40af-3b63-4f87-bff6-9afa11bf8edb req-de8d9338-5f5f-4049-b582-2366542e6c8c 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Refreshing instance network info cache due to event network-changed-d9985197-6aa0-4811-a620-ee1b4aa74e74. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 28 18:04:16 compute-0 nova_compute[189296]: 2025-11-28 18:04:16.975 189300 DEBUG oslo_concurrency.lockutils [req-0e5f40af-3b63-4f87-bff6-9afa11bf8edb req-de8d9338-5f5f-4049-b582-2366542e6c8c 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-738e5649-3e79-434b-9fbe-4aff6d71b051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:04:17 compute-0 nova_compute[189296]: 2025-11-28 18:04:17.041 189300 DEBUG nova.network.neutron [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.039 189300 DEBUG nova.network.neutron [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Updating instance_info_cache with network_info: [{"id": "d9985197-6aa0-4811-a620-ee1b4aa74e74", "address": "fa:16:3e:5c:e2:d6", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9985197-6a", "ovs_interfaceid": "d9985197-6aa0-4811-a620-ee1b4aa74e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.084 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Releasing lock "refresh_cache-738e5649-3e79-434b-9fbe-4aff6d71b051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.085 189300 DEBUG nova.compute.manager [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Instance network_info: |[{"id": "d9985197-6aa0-4811-a620-ee1b4aa74e74", "address": "fa:16:3e:5c:e2:d6", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9985197-6a", "ovs_interfaceid": "d9985197-6aa0-4811-a620-ee1b4aa74e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.085 189300 DEBUG oslo_concurrency.lockutils [req-0e5f40af-3b63-4f87-bff6-9afa11bf8edb req-de8d9338-5f5f-4049-b582-2366542e6c8c 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-738e5649-3e79-434b-9fbe-4aff6d71b051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.085 189300 DEBUG nova.network.neutron [req-0e5f40af-3b63-4f87-bff6-9afa11bf8edb req-de8d9338-5f5f-4049-b582-2366542e6c8c 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Refreshing network info cache for port d9985197-6aa0-4811-a620-ee1b4aa74e74 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.088 189300 DEBUG nova.virt.libvirt.driver [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Start _get_guest_xml network_info=[{"id": "d9985197-6aa0-4811-a620-ee1b4aa74e74", "address": "fa:16:3e:5c:e2:d6", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9985197-6a", "ovs_interfaceid": "d9985197-6aa0-4811-a620-ee1b4aa74e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-28T17:54:35Z,direct_url=<?>,disk_format='qcow2',id=f54c2688-82d2-4cd3-8c3b-96e774162948,min_disk=0,min_ram=0,name='cirros',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-28T17:54:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'image_id': 'f54c2688-82d2-4cd3-8c3b-96e774162948'}], 'ephemerals': [{'device_type': 'disk', 'guest_format': None, 'size': 1, 'encryption_options': None, 'device_name': '/dev/vdb', 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.096 189300 WARNING nova.virt.libvirt.driver [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.103 189300 DEBUG nova.virt.libvirt.host [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.104 189300 DEBUG nova.virt.libvirt.host [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.108 189300 DEBUG nova.virt.libvirt.host [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.109 189300 DEBUG nova.virt.libvirt.host [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.109 189300 DEBUG nova.virt.libvirt.driver [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.110 189300 DEBUG nova.virt.hardware [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-28T17:54:40Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='e125fa74-9e9f-47dc-8c8e-699980f99f10',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-28T17:54:35Z,direct_url=<?>,disk_format='qcow2',id=f54c2688-82d2-4cd3-8c3b-96e774162948,min_disk=0,min_ram=0,name='cirros',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-28T17:54:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.110 189300 DEBUG nova.virt.hardware [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.111 189300 DEBUG nova.virt.hardware [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.111 189300 DEBUG nova.virt.hardware [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.111 189300 DEBUG nova.virt.hardware [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.111 189300 DEBUG nova.virt.hardware [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.112 189300 DEBUG nova.virt.hardware [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.112 189300 DEBUG nova.virt.hardware [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.112 189300 DEBUG nova.virt.hardware [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.113 189300 DEBUG nova.virt.hardware [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.113 189300 DEBUG nova.virt.hardware [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.116 189300 DEBUG nova.virt.libvirt.vif [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T18:04:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-7knpyto-cwp5r5rzhumi-q43femobqz35-vnf-twxbbv63dycu',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7knpyto-cwp5r5rzhumi-q43femobqz35-vnf-twxbbv63dycu',id=5,image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='ac6a0a76-f006-4c50-a4a8-904a1f128161'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='79ee04b003ca4eb8a045699c7852a8b0',ramdisk_id='',reservation_id='r-al0gs0f7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:04:15Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wNjczMDAwODcwNjExNTAyODIwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA2NzMwMDA4NzA2MTE1MDI4MjA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDY3MzAwMDg3MDYxMTUwMjgyMD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA2NzMwMDA4NzA2MTE1MDI4MjA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wNjczMDAwODcwNjExNTAyODIwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wNjczMDAwODcwNjExNTAyODIwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Nov 28 18:04:18 compute-0 nova_compute[189296]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDY3MzAwMDg3MDYxMTUwMjgyMD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA2NzMwMDA4NzA2MTE1MDI4MjA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wNjczMDAwODcwNjExNTAyODIwPT0tLQo=',user_id='6a35450c34a344b1a4e63aae1be2b971',uuid=738e5649-3e79-434b-9fbe-4aff6d71b051,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d9985197-6aa0-4811-a620-ee1b4aa74e74", "address": "fa:16:3e:5c:e2:d6", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9985197-6a", "ovs_interfaceid": "d9985197-6aa0-4811-a620-ee1b4aa74e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.117 189300 DEBUG nova.network.os_vif_util [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converting VIF {"id": "d9985197-6aa0-4811-a620-ee1b4aa74e74", "address": "fa:16:3e:5c:e2:d6", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9985197-6a", "ovs_interfaceid": "d9985197-6aa0-4811-a620-ee1b4aa74e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.117 189300 DEBUG nova.network.os_vif_util [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:e2:d6,bridge_name='br-int',has_traffic_filtering=True,id=d9985197-6aa0-4811-a620-ee1b4aa74e74,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd9985197-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.118 189300 DEBUG nova.objects.instance [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 738e5649-3e79-434b-9fbe-4aff6d71b051 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.133 189300 DEBUG nova.virt.libvirt.driver [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] End _get_guest_xml xml=<domain type="kvm">
Nov 28 18:04:18 compute-0 nova_compute[189296]:  <uuid>738e5649-3e79-434b-9fbe-4aff6d71b051</uuid>
Nov 28 18:04:18 compute-0 nova_compute[189296]:  <name>instance-00000005</name>
Nov 28 18:04:18 compute-0 nova_compute[189296]:  <memory>524288</memory>
Nov 28 18:04:18 compute-0 nova_compute[189296]:  <vcpu>1</vcpu>
Nov 28 18:04:18 compute-0 nova_compute[189296]:  <metadata>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <nova:name>vn-7knpyto-cwp5r5rzhumi-q43femobqz35-vnf-twxbbv63dycu</nova:name>
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <nova:creationTime>2025-11-28 18:04:18</nova:creationTime>
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <nova:flavor name="m1.small">
Nov 28 18:04:18 compute-0 nova_compute[189296]:        <nova:memory>512</nova:memory>
Nov 28 18:04:18 compute-0 nova_compute[189296]:        <nova:disk>1</nova:disk>
Nov 28 18:04:18 compute-0 nova_compute[189296]:        <nova:swap>0</nova:swap>
Nov 28 18:04:18 compute-0 nova_compute[189296]:        <nova:ephemeral>1</nova:ephemeral>
Nov 28 18:04:18 compute-0 nova_compute[189296]:        <nova:vcpus>1</nova:vcpus>
Nov 28 18:04:18 compute-0 nova_compute[189296]:      </nova:flavor>
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <nova:owner>
Nov 28 18:04:18 compute-0 nova_compute[189296]:        <nova:user uuid="6a35450c34a344b1a4e63aae1be2b971">admin</nova:user>
Nov 28 18:04:18 compute-0 nova_compute[189296]:        <nova:project uuid="79ee04b003ca4eb8a045699c7852a8b0">admin</nova:project>
Nov 28 18:04:18 compute-0 nova_compute[189296]:      </nova:owner>
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <nova:root type="image" uuid="f54c2688-82d2-4cd3-8c3b-96e774162948"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <nova:ports>
Nov 28 18:04:18 compute-0 nova_compute[189296]:        <nova:port uuid="d9985197-6aa0-4811-a620-ee1b4aa74e74">
Nov 28 18:04:18 compute-0 nova_compute[189296]:          <nova:ip type="fixed" address="192.168.0.35" ipVersion="4"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:        </nova:port>
Nov 28 18:04:18 compute-0 nova_compute[189296]:      </nova:ports>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    </nova:instance>
Nov 28 18:04:18 compute-0 nova_compute[189296]:  </metadata>
Nov 28 18:04:18 compute-0 nova_compute[189296]:  <sysinfo type="smbios">
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <system>
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <entry name="manufacturer">RDO</entry>
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <entry name="product">OpenStack Compute</entry>
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <entry name="serial">738e5649-3e79-434b-9fbe-4aff6d71b051</entry>
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <entry name="uuid">738e5649-3e79-434b-9fbe-4aff6d71b051</entry>
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <entry name="family">Virtual Machine</entry>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    </system>
Nov 28 18:04:18 compute-0 nova_compute[189296]:  </sysinfo>
Nov 28 18:04:18 compute-0 nova_compute[189296]:  <os>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <boot dev="hd"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <smbios mode="sysinfo"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:  </os>
Nov 28 18:04:18 compute-0 nova_compute[189296]:  <features>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <acpi/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <apic/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <vmcoreinfo/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:  </features>
Nov 28 18:04:18 compute-0 nova_compute[189296]:  <clock offset="utc">
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <timer name="pit" tickpolicy="delay"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <timer name="hpet" present="no"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:  </clock>
Nov 28 18:04:18 compute-0 nova_compute[189296]:  <cpu mode="host-model" match="exact">
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <topology sockets="1" cores="1" threads="1"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:  </cpu>
Nov 28 18:04:18 compute-0 nova_compute[189296]:  <devices>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <disk type="file" device="disk">
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <target dev="vda" bus="virtio"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <disk type="file" device="disk">
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <target dev="vdb" bus="virtio"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <disk type="file" device="cdrom">
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <driver name="qemu" type="raw" cache="none"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.config"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <target dev="sda" bus="sata"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <interface type="ethernet">
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <mac address="fa:16:3e:5c:e2:d6"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <driver name="vhost" rx_queue_size="512"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <mtu size="1442"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <target dev="tapd9985197-6a"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    </interface>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <serial type="pty">
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <log file="/var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/console.log" append="off"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    </serial>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <video>
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    </video>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <input type="tablet" bus="usb"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <rng model="virtio">
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <backend model="random">/dev/urandom</backend>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    </rng>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <controller type="usb" index="0"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    <memballoon model="virtio">
Nov 28 18:04:18 compute-0 nova_compute[189296]:      <stats period="10"/>
Nov 28 18:04:18 compute-0 nova_compute[189296]:    </memballoon>
Nov 28 18:04:18 compute-0 nova_compute[189296]:  </devices>
Nov 28 18:04:18 compute-0 nova_compute[189296]: </domain>
Nov 28 18:04:18 compute-0 nova_compute[189296]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.134 189300 DEBUG nova.compute.manager [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Preparing to wait for external event network-vif-plugged-d9985197-6aa0-4811-a620-ee1b4aa74e74 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.135 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "738e5649-3e79-434b-9fbe-4aff6d71b051-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.135 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "738e5649-3e79-434b-9fbe-4aff6d71b051-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.135 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "738e5649-3e79-434b-9fbe-4aff6d71b051-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.136 189300 DEBUG nova.virt.libvirt.vif [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T18:04:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-7knpyto-cwp5r5rzhumi-q43femobqz35-vnf-twxbbv63dycu',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7knpyto-cwp5r5rzhumi-q43femobqz35-vnf-twxbbv63dycu',id=5,image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='ac6a0a76-f006-4c50-a4a8-904a1f128161'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='79ee04b003ca4eb8a045699c7852a8b0',ramdisk_id='',reservation_id='r-al0gs0f7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:04:15Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wNjczMDAwODcwNjExNTAyODIwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA2NzMwMDA4NzA2MTE1MDI4MjA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDY3MzAwMDg3MDYxMTUwMjgyMD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA2NzMwMDA4NzA2MTE1MDI4MjA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wNjczMDAwODcwNjExNTAyODIwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wNjczMDAwODcwNjExNTAyODIwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Nov 28 18:04:18 compute-0 nova_compute[189296]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDY3MzAwMDg3MDYxMTUwMjgyMD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA2NzMwMDA4NzA2MTE1MDI4MjA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wNjczMDAwODcwNjExNTAyODIwPT0tLQo=',user_id='6a35450c34a344b1a4e63aae1be2b971',uuid=738e5649-3e79-434b-9fbe-4aff6d71b051,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d9985197-6aa0-4811-a620-ee1b4aa74e74", "address": "fa:16:3e:5c:e2:d6", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9985197-6a", "ovs_interfaceid": "d9985197-6aa0-4811-a620-ee1b4aa74e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.136 189300 DEBUG nova.network.os_vif_util [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converting VIF {"id": "d9985197-6aa0-4811-a620-ee1b4aa74e74", "address": "fa:16:3e:5c:e2:d6", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9985197-6a", "ovs_interfaceid": "d9985197-6aa0-4811-a620-ee1b4aa74e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.137 189300 DEBUG nova.network.os_vif_util [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:e2:d6,bridge_name='br-int',has_traffic_filtering=True,id=d9985197-6aa0-4811-a620-ee1b4aa74e74,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd9985197-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.137 189300 DEBUG os_vif [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:e2:d6,bridge_name='br-int',has_traffic_filtering=True,id=d9985197-6aa0-4811-a620-ee1b4aa74e74,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd9985197-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.138 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.139 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.139 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.141 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.142 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9985197-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.142 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd9985197-6a, col_values=(('external_ids', {'iface-id': 'd9985197-6aa0-4811-a620-ee1b4aa74e74', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5c:e2:d6', 'vm-uuid': '738e5649-3e79-434b-9fbe-4aff6d71b051'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.144 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:18 compute-0 NetworkManager[56307]: <info>  [1764353058.1456] manager: (tapd9985197-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.149 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.151 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.153 189300 INFO os_vif [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:e2:d6,bridge_name='br-int',has_traffic_filtering=True,id=d9985197-6aa0-4811-a620-ee1b4aa74e74,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd9985197-6a')#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.203 189300 DEBUG nova.virt.libvirt.driver [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.203 189300 DEBUG nova.virt.libvirt.driver [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.203 189300 DEBUG nova.virt.libvirt.driver [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.204 189300 DEBUG nova.virt.libvirt.driver [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] No VIF found with MAC fa:16:3e:5c:e2:d6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.204 189300 INFO nova.virt.libvirt.driver [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Using config drive#033[00m
Nov 28 18:04:18 compute-0 rsyslogd[236416]: message too long (8192) with configured size 8096, begin of message is: 2025-11-28 18:04:18.116 189300 DEBUG nova.virt.libvirt.vif [None req-7c39e31c-7a [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 28 18:04:18 compute-0 rsyslogd[236416]: message too long (8192) with configured size 8096, begin of message is: 2025-11-28 18:04:18.136 189300 DEBUG nova.virt.libvirt.vif [None req-7c39e31c-7a [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.705 189300 INFO nova.virt.libvirt.driver [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Creating config drive at /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.config#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.710 189300 DEBUG oslo_concurrency.processutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6m08mgb0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.836 189300 DEBUG oslo_concurrency.processutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6m08mgb0" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:04:18 compute-0 kernel: tapd9985197-6a: entered promiscuous mode
Nov 28 18:04:18 compute-0 NetworkManager[56307]: <info>  [1764353058.9160] manager: (tapd9985197-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Nov 28 18:04:18 compute-0 ovn_controller[97771]: 2025-11-28T18:04:18Z|00052|binding|INFO|Claiming lport d9985197-6aa0-4811-a620-ee1b4aa74e74 for this chassis.
Nov 28 18:04:18 compute-0 ovn_controller[97771]: 2025-11-28T18:04:18Z|00053|binding|INFO|d9985197-6aa0-4811-a620-ee1b4aa74e74: Claiming fa:16:3e:5c:e2:d6 192.168.0.35
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.917 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:04:18.925 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:e2:d6 192.168.0.35'], port_security=['fa:16:3e:5c:e2:d6 192.168.0.35'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-po7lv7knpyto-cwp5r5rzhumi-q43femobqz35-port-uyqu37nujs2e', 'neutron:cidrs': '192.168.0.35/24', 'neutron:device_id': '738e5649-3e79-434b-9fbe-4aff6d71b051', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-po7lv7knpyto-cwp5r5rzhumi-q43femobqz35-port-uyqu37nujs2e', 'neutron:project_id': '79ee04b003ca4eb8a045699c7852a8b0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a309e23b-efb6-4377-8050-5a658324ee07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.208'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37710b57-0bdd-4c1a-aa8d-366aa83fbf51, chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=d9985197-6aa0-4811-a620-ee1b4aa74e74) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:04:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:04:18.926 106624 INFO neutron.agent.ovn.metadata.agent [-] Port d9985197-6aa0-4811-a620-ee1b4aa74e74 in datapath 5cc11a5f-7338-49fd-ba02-2db7ff676c4f bound to our chassis#033[00m
Nov 28 18:04:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:04:18.927 106624 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cc11a5f-7338-49fd-ba02-2db7ff676c4f#033[00m
Nov 28 18:04:18 compute-0 ovn_controller[97771]: 2025-11-28T18:04:18Z|00054|binding|INFO|Setting lport d9985197-6aa0-4811-a620-ee1b4aa74e74 ovn-installed in OVS
Nov 28 18:04:18 compute-0 ovn_controller[97771]: 2025-11-28T18:04:18Z|00055|binding|INFO|Setting lport d9985197-6aa0-4811-a620-ee1b4aa74e74 up in Southbound
Nov 28 18:04:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:04:18.946 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[bc8f2081-352c-4838-afe0-59ab8dfdca9a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.946 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:18 compute-0 nova_compute[189296]: 2025-11-28 18:04:18.955 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:18 compute-0 systemd-machined[155703]: New machine qemu-5-instance-00000005.
Nov 28 18:04:18 compute-0 systemd-udevd[242595]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 18:04:18 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Nov 28 18:04:18 compute-0 NetworkManager[56307]: <info>  [1764353058.9811] device (tapd9985197-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 28 18:04:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:04:18.980 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[90441fb2-74fd-43df-be1d-869c1ca24e24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:04:18 compute-0 NetworkManager[56307]: <info>  [1764353058.9823] device (tapd9985197-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 28 18:04:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:04:18.984 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[3428308b-8531-498c-b554-7b710ef8d521]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:04:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:04:19.011 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[ff2855fa-edc7-4e10-804d-dfe7e9e009cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:04:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:04:19.028 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[ce94cc9f-de67-4bbc-9297-a4201918f63d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc11a5f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:38:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 13, 'rx_bytes': 532, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 13, 'rx_bytes': 532, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 370971, 'reachable_time': 41615, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242600, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:04:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:04:19.044 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[d079f909-c9f7-4116-83ad-1362a22f910d]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap5cc11a5f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 370983, 'tstamp': 370983}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242606, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5cc11a5f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 370986, 'tstamp': 370986}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242606, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:04:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:04:19.046 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc11a5f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.048 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.049 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:04:19.049 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cc11a5f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:04:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:04:19.050 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:04:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:04:19.050 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cc11a5f-70, col_values=(('external_ids', {'iface-id': '467e3797-177d-4174-b963-0efbd15595b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:04:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:04:19.050 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.220 189300 DEBUG nova.compute.manager [req-c496c616-bd73-40cb-8e92-ee9f8fbdf03c req-3a7aa476-4460-4743-8c9e-7ea887916eb5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Received event network-vif-plugged-d9985197-6aa0-4811-a620-ee1b4aa74e74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.220 189300 DEBUG oslo_concurrency.lockutils [req-c496c616-bd73-40cb-8e92-ee9f8fbdf03c req-3a7aa476-4460-4743-8c9e-7ea887916eb5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "738e5649-3e79-434b-9fbe-4aff6d71b051-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.221 189300 DEBUG oslo_concurrency.lockutils [req-c496c616-bd73-40cb-8e92-ee9f8fbdf03c req-3a7aa476-4460-4743-8c9e-7ea887916eb5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "738e5649-3e79-434b-9fbe-4aff6d71b051-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.221 189300 DEBUG oslo_concurrency.lockutils [req-c496c616-bd73-40cb-8e92-ee9f8fbdf03c req-3a7aa476-4460-4743-8c9e-7ea887916eb5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "738e5649-3e79-434b-9fbe-4aff6d71b051-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.221 189300 DEBUG nova.compute.manager [req-c496c616-bd73-40cb-8e92-ee9f8fbdf03c req-3a7aa476-4460-4743-8c9e-7ea887916eb5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Processing event network-vif-plugged-d9985197-6aa0-4811-a620-ee1b4aa74e74 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.440 189300 DEBUG nova.compute.manager [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.441 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764353059.438956, 738e5649-3e79-434b-9fbe-4aff6d71b051 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.441 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] VM Started (Lifecycle Event)#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.454 189300 DEBUG nova.virt.libvirt.driver [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.460 189300 INFO nova.virt.libvirt.driver [-] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Instance spawned successfully.#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.461 189300 DEBUG nova.virt.libvirt.driver [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.476 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.489 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.499 189300 DEBUG nova.virt.libvirt.driver [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.500 189300 DEBUG nova.virt.libvirt.driver [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.501 189300 DEBUG nova.virt.libvirt.driver [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.501 189300 DEBUG nova.virt.libvirt.driver [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.502 189300 DEBUG nova.virt.libvirt.driver [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.503 189300 DEBUG nova.virt.libvirt.driver [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.513 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.514 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764353059.4394906, 738e5649-3e79-434b-9fbe-4aff6d71b051 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.514 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] VM Paused (Lifecycle Event)#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.541 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.547 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764353059.448066, 738e5649-3e79-434b-9fbe-4aff6d71b051 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.547 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] VM Resumed (Lifecycle Event)#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.549 189300 DEBUG nova.network.neutron [req-0e5f40af-3b63-4f87-bff6-9afa11bf8edb req-de8d9338-5f5f-4049-b582-2366542e6c8c 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Updated VIF entry in instance network info cache for port d9985197-6aa0-4811-a620-ee1b4aa74e74. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.550 189300 DEBUG nova.network.neutron [req-0e5f40af-3b63-4f87-bff6-9afa11bf8edb req-de8d9338-5f5f-4049-b582-2366542e6c8c 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Updating instance_info_cache with network_info: [{"id": "d9985197-6aa0-4811-a620-ee1b4aa74e74", "address": "fa:16:3e:5c:e2:d6", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9985197-6a", "ovs_interfaceid": "d9985197-6aa0-4811-a620-ee1b4aa74e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.583 189300 INFO nova.compute.manager [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Took 4.04 seconds to spawn the instance on the hypervisor.#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.583 189300 DEBUG nova.compute.manager [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.584 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.594 189300 DEBUG oslo_concurrency.lockutils [req-0e5f40af-3b63-4f87-bff6-9afa11bf8edb req-de8d9338-5f5f-4049-b582-2366542e6c8c 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-738e5649-3e79-434b-9fbe-4aff6d71b051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.595 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.624 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.654 189300 INFO nova.compute.manager [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Took 4.60 seconds to build instance.#033[00m
Nov 28 18:04:19 compute-0 nova_compute[189296]: 2025-11-28 18:04:19.671 189300 DEBUG oslo_concurrency.lockutils [None req-7c39e31c-7a3b-40ad-b900-c052e9e9d5cd 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "738e5649-3e79-434b-9fbe-4aff6d71b051" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:04:21 compute-0 podman[242616]: 2025-11-28 18:04:21.05145087 +0000 UTC m=+0.099434516 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 28 18:04:21 compute-0 podman[242617]: 2025-11-28 18:04:21.052797093 +0000 UTC m=+0.097143430 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Nov 28 18:04:21 compute-0 nova_compute[189296]: 2025-11-28 18:04:21.297 189300 DEBUG nova.compute.manager [req-1d1e8536-e9e5-4a54-ba01-7cd9dc9eff63 req-aedc8811-aef6-41ca-9422-5ef2db1b711e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Received event network-vif-plugged-d9985197-6aa0-4811-a620-ee1b4aa74e74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:04:21 compute-0 nova_compute[189296]: 2025-11-28 18:04:21.298 189300 DEBUG oslo_concurrency.lockutils [req-1d1e8536-e9e5-4a54-ba01-7cd9dc9eff63 req-aedc8811-aef6-41ca-9422-5ef2db1b711e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "738e5649-3e79-434b-9fbe-4aff6d71b051-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:04:21 compute-0 nova_compute[189296]: 2025-11-28 18:04:21.298 189300 DEBUG oslo_concurrency.lockutils [req-1d1e8536-e9e5-4a54-ba01-7cd9dc9eff63 req-aedc8811-aef6-41ca-9422-5ef2db1b711e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "738e5649-3e79-434b-9fbe-4aff6d71b051-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:04:21 compute-0 nova_compute[189296]: 2025-11-28 18:04:21.298 189300 DEBUG oslo_concurrency.lockutils [req-1d1e8536-e9e5-4a54-ba01-7cd9dc9eff63 req-aedc8811-aef6-41ca-9422-5ef2db1b711e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "738e5649-3e79-434b-9fbe-4aff6d71b051-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:04:21 compute-0 nova_compute[189296]: 2025-11-28 18:04:21.298 189300 DEBUG nova.compute.manager [req-1d1e8536-e9e5-4a54-ba01-7cd9dc9eff63 req-aedc8811-aef6-41ca-9422-5ef2db1b711e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] No waiting events found dispatching network-vif-plugged-d9985197-6aa0-4811-a620-ee1b4aa74e74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:04:21 compute-0 nova_compute[189296]: 2025-11-28 18:04:21.298 189300 WARNING nova.compute.manager [req-1d1e8536-e9e5-4a54-ba01-7cd9dc9eff63 req-aedc8811-aef6-41ca-9422-5ef2db1b711e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Received unexpected event network-vif-plugged-d9985197-6aa0-4811-a620-ee1b4aa74e74 for instance with vm_state active and task_state None.#033[00m
Nov 28 18:04:23 compute-0 podman[242655]: 2025-11-28 18:04:22.999768013 +0000 UTC m=+0.062417333 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 18:04:23 compute-0 podman[242656]: 2025-11-28 18:04:23.030324949 +0000 UTC m=+0.082402071 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=base rhel9, container_name=kepler, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, managed_by=edpm_ansible, name=ubi9, architecture=x86_64, maintainer=Red Hat, Inc., vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Nov 28 18:04:23 compute-0 nova_compute[189296]: 2025-11-28 18:04:23.144 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:23 compute-0 nova_compute[189296]: 2025-11-28 18:04:23.155 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:26 compute-0 podman[242697]: 2025-11-28 18:04:26.040539588 +0000 UTC m=+0.103694290 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:04:28 compute-0 nova_compute[189296]: 2025-11-28 18:04:28.148 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:28 compute-0 nova_compute[189296]: 2025-11-28 18:04:28.157 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:29 compute-0 podman[203494]: time="2025-11-28T18:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:04:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 18:04:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4775 "" "Go-http-client/1.1"
Nov 28 18:04:31 compute-0 openstack_network_exporter[205632]: ERROR   18:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:04:31 compute-0 openstack_network_exporter[205632]: ERROR   18:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:04:31 compute-0 openstack_network_exporter[205632]: ERROR   18:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:04:31 compute-0 openstack_network_exporter[205632]: ERROR   18:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:04:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:04:31 compute-0 openstack_network_exporter[205632]: ERROR   18:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:04:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:04:33 compute-0 nova_compute[189296]: 2025-11-28 18:04:33.152 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:33 compute-0 nova_compute[189296]: 2025-11-28 18:04:33.159 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:36 compute-0 podman[242722]: 2025-11-28 18:04:36.998137661 +0000 UTC m=+0.054120069 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:04:38 compute-0 nova_compute[189296]: 2025-11-28 18:04:38.157 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:38 compute-0 nova_compute[189296]: 2025-11-28 18:04:38.164 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:43 compute-0 nova_compute[189296]: 2025-11-28 18:04:43.160 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:43 compute-0 nova_compute[189296]: 2025-11-28 18:04:43.163 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:46 compute-0 podman[242745]: 2025-11-28 18:04:46.030268216 +0000 UTC m=+0.083641848 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., container_name=openstack_network_exporter, managed_by=edpm_ansible, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, release=1755695350, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.33.7)
Nov 28 18:04:46 compute-0 podman[242746]: 2025-11-28 18:04:46.036900807 +0000 UTC m=+0.085858421 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, tcib_build_tag=f26160204c78771e78cdd2489258319b, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Nov 28 18:04:46 compute-0 podman[242747]: 2025-11-28 18:04:46.048068429 +0000 UTC m=+0.094467331 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:04:48 compute-0 nova_compute[189296]: 2025-11-28 18:04:48.164 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:48 compute-0 ovn_controller[97771]: 2025-11-28T18:04:48Z|00056|memory_trim|INFO|Detected inactivity (last active 30018 ms ago): trimming memory
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.978 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.979 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.979 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.979 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc143395760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.983 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.983 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.983 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.983 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.985 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf', 'name': 'vn-7knpyto-myqv6vc5iwu6-3wmt66b4jk5x-vnf-uuehi3czwwyv', 'flavor': {'id': 'e125fa74-9e9f-47dc-8c8e-699980f99f10', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'f54c2688-82d2-4cd3-8c3b-96e774162948'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '79ee04b003ca4eb8a045699c7852a8b0', 'user_id': '6a35450c34a344b1a4e63aae1be2b971', 'hostId': 'db9a2769e8f144ae30ff05291a20072f031ca2fe14565f94b8d8a651', 'status': 'active', 'metadata': {'metering.server_group': 'ac6a0a76-f006-4c50-a4a8-904a1f128161'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.987 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 738e5649-3e79-434b-9fbe-4aff6d71b051 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 28 18:04:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:51.988 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/738e5649-3e79-434b-9fbe-4aff6d71b051 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}1b19fef84fe76c5f8eb41f423a94cfc31b2af00fb7940935967c184dd40fa55a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 28 18:04:52 compute-0 podman[242804]: 2025-11-28 18:04:52.025214618 +0000 UTC m=+0.082510900 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 28 18:04:52 compute-0 podman[242807]: 2025-11-28 18:04:52.041159547 +0000 UTC m=+0.087170425 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:04:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:04:52.609 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:04:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:04:52.610 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:04:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:04:52.610 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:04:52 compute-0 ovn_controller[97771]: 2025-11-28T18:04:52Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5c:e2:d6 192.168.0.35
Nov 28 18:04:52 compute-0 ovn_controller[97771]: 2025-11-28T18:04:52Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5c:e2:d6 192.168.0.35
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.694 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Fri, 28 Nov 2025 18:04:52 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-3d3f1818-e490-47d8-a935-8b0ddc3963f1 x-openstack-request-id: req-3d3f1818-e490-47d8-a935-8b0ddc3963f1 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.695 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "738e5649-3e79-434b-9fbe-4aff6d71b051", "name": "vn-7knpyto-cwp5r5rzhumi-q43femobqz35-vnf-twxbbv63dycu", "status": "ACTIVE", "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "user_id": "6a35450c34a344b1a4e63aae1be2b971", "metadata": {"metering.server_group": "ac6a0a76-f006-4c50-a4a8-904a1f128161"}, "hostId": "db9a2769e8f144ae30ff05291a20072f031ca2fe14565f94b8d8a651", "image": {"id": "f54c2688-82d2-4cd3-8c3b-96e774162948", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/f54c2688-82d2-4cd3-8c3b-96e774162948"}]}, "flavor": {"id": "e125fa74-9e9f-47dc-8c8e-699980f99f10", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/e125fa74-9e9f-47dc-8c8e-699980f99f10"}]}, "created": "2025-11-28T18:04:13Z", "updated": "2025-11-28T18:04:19Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.35", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:5c:e2:d6"}, {"version": 4, "addr": "192.168.122.208", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:5c:e2:d6"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/738e5649-3e79-434b-9fbe-4aff6d71b051"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/738e5649-3e79-434b-9fbe-4aff6d71b051"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-28T18:04:19.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000005", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.695 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/738e5649-3e79-434b-9fbe-4aff6d71b051 used request id req-3d3f1818-e490-47d8-a935-8b0ddc3963f1 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.696 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '738e5649-3e79-434b-9fbe-4aff6d71b051', 'name': 'vn-7knpyto-cwp5r5rzhumi-q43femobqz35-vnf-twxbbv63dycu', 'flavor': {'id': 'e125fa74-9e9f-47dc-8c8e-699980f99f10', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'f54c2688-82d2-4cd3-8c3b-96e774162948'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '79ee04b003ca4eb8a045699c7852a8b0', 'user_id': '6a35450c34a344b1a4e63aae1be2b971', 'hostId': 'db9a2769e8f144ae30ff05291a20072f031ca2fe14565f94b8d8a651', 'status': 'active', 'metadata': {'metering.server_group': 'ac6a0a76-f006-4c50-a4a8-904a1f128161'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.700 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5d10f9fc-89ea-4059-8532-7e0aec0791d6', 'name': 'test_0', 'flavor': {'id': 'e125fa74-9e9f-47dc-8c8e-699980f99f10', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'f54c2688-82d2-4cd3-8c3b-96e774162948'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '79ee04b003ca4eb8a045699c7852a8b0', 'user_id': '6a35450c34a344b1a4e63aae1be2b971', 'hostId': 'db9a2769e8f144ae30ff05291a20072f031ca2fe14565f94b8d8a651', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.702 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3e7aebb1-2fd3-449c-be21-02c4d1b57717', 'name': 'vn-7knpyto-6e6fe7uhqqsg-35p6vulzyxtr-vnf-mf7ve6yw5m3s', 'flavor': {'id': 'e125fa74-9e9f-47dc-8c8e-699980f99f10', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'f54c2688-82d2-4cd3-8c3b-96e774162948'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '79ee04b003ca4eb8a045699c7852a8b0', 'user_id': '6a35450c34a344b1a4e63aae1be2b971', 'hostId': 'db9a2769e8f144ae30ff05291a20072f031ca2fe14565f94b8d8a651', 'status': 'active', 'metadata': {'metering.server_group': 'ac6a0a76-f006-4c50-a4a8-904a1f128161'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.702 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.702 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.702 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.703 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.703 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-28T18:04:52.702991) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.727 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.728 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.728 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.748 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.749 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.749 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.772 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.773 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.773 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.799 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.800 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.800 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.801 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.801 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc1433970b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.801 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.801 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.801 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.802 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.802 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-28T18:04:52.802040) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.859 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.860 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.860 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.919 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.bytes volume: 21199872 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.919 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.bytes volume: 2160128 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.920 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.bytes volume: 328014 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.986 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.986 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:52.986 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.043 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.043 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.043 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.044 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.044 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc1433971d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.044 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.045 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.045 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.045 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.045 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-28T18:04:53.045264) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.045 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.latency volume: 301308176 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.046 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.latency volume: 58590956 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.046 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.latency volume: 53252991 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.046 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.latency volume: 320276091 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.047 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.latency volume: 46522129 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.047 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.latency volume: 56081893 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.047 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.latency volume: 284678818 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.047 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.latency volume: 69824352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.048 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.latency volume: 37055244 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.048 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.latency volume: 321385299 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.048 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.latency volume: 64866438 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.048 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.latency volume: 53024748 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.049 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.049 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc143397c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.049 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.049 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.050 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.050 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.050 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-28T18:04:53.050199) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.054 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.058 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 738e5649-3e79-434b-9fbe-4aff6d71b051 / tapd9985197-6a inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.058 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.061 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.063 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.064 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.064 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc143397620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.064 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.064 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.064 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.065 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.065 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-28T18:04:53.064964) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.084 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/memory.usage volume: 49.046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.106 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/memory.usage volume: 33.30078125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.126 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.149 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/memory.usage volume: 49.0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.149 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.150 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc143397260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.150 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.150 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.150 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.150 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.150 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.150 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.151 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.151 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-28T18:04:53.150597) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.151 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.usage volume: 19791872 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.152 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.152 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.152 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.152 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.153 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.153 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.153 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.153 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.154 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.154 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc143397290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.154 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.154 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.154 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.154 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.155 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.155 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.155 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-28T18:04:53.154879) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.156 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.156 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.bytes volume: 17543168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.156 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.156 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.156 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.157 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.157 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.157 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.157 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.158 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.158 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.158 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc143408290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.158 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.158 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.159 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.159 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.159 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.159 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.159 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.160 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-28T18:04:53.159087) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.160 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.160 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.161 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc1433972f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.161 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.161 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.161 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.161 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.161 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-28T18:04:53.161482) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.161 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.latency volume: 402835350 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.162 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.latency volume: 7108483 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.162 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.162 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.latency volume: 652490893 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.162 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.latency volume: 7967925 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.162 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.163 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.latency volume: 646402207 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.163 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.latency volume: 6041958 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.163 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.163 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.latency volume: 995977377 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.163 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.latency volume: 9215217 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.164 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.164 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.164 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc144640f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.164 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.164 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.165 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.165 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.165 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.requests volume: 239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.165 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.165 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.166 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-28T18:04:53.165073) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.166 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.requests volume: 128 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.166 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.166 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.166 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.167 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 nova_compute[189296]: 2025-11-28 18:04:53.167 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.167 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.167 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.requests volume: 242 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.167 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.168 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 nova_compute[189296]: 2025-11-28 18:04:53.168 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:04:53 compute-0 nova_compute[189296]: 2025-11-28 18:04:53.168 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.168 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.168 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc1433976b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.168 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:04:53 compute-0 nova_compute[189296]: 2025-11-28 18:04:53.168 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.168 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.168 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.169 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:04:53 compute-0 nova_compute[189296]: 2025-11-28 18:04:53.169 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.169 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-28T18:04:53.168988) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.169 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.incoming.bytes.delta volume: 1564 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.169 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.169 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.170 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.170 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:04:53 compute-0 nova_compute[189296]: 2025-11-28 18:04:53.170 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.170 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc143397fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.170 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.171 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.171 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.171 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.171 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-28T18:04:53.171148) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.171 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.171 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.172 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.172 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.172 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.172 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc14457db80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.173 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.173 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.173 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.173 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.173 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-28T18:04:53.173398) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.173 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/cpu volume: 33310000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.173 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/cpu volume: 32530000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.174 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/cpu volume: 37000000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.174 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/cpu volume: 260690000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.174 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.174 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc143397950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.174 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.175 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.175 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.175 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.175 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.175 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-28T18:04:53.175190) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.175 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-7knpyto-cwp5r5rzhumi-q43femobqz35-vnf-twxbbv63dycu>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-7knpyto-cwp5r5rzhumi-q43femobqz35-vnf-twxbbv63dycu>]
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.176 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc143397380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.176 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.176 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.176 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.176 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.177 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.177 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc143397bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.177 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-28T18:04:53.176555) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.177 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.177 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.178 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.178 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.178 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.178 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-28T18:04:53.178099) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.178 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.incoming.packets volume: 11 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.178 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.179 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.incoming.packets volume: 57 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.179 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.179 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc1433973e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.179 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.179 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.180 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.180 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.180 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.181 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc143397c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.181 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-28T18:04:53.180269) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.181 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.181 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.181 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.181 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.181 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.182 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.182 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-28T18:04:53.181769) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.182 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.182 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.183 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.183 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc143397ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.183 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.183 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.183 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.183 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.183 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-28T18:04:53.183730) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.184 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.184 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.outgoing.bytes volume: 1077 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.184 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.184 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.outgoing.bytes volume: 7592 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.185 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.185 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc1460ad370>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.185 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.185 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.185 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.185 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.185 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.186 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-28T18:04:53.185657) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.186 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.186 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.186 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.allocation volume: 20258816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.187 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.187 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.187 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.allocation volume: 21962752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.187 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.188 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.188 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.188 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.188 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.189 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.189 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc143397d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.189 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.190 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.190 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.190 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.190 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-28T18:04:53.190304) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.190 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.outgoing.bytes.delta volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.191 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.191 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.191 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.192 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.192 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc143397e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.192 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.192 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.192 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.192 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.193 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-28T18:04:53.192795) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.193 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.193 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-7knpyto-cwp5r5rzhumi-q43femobqz35-vnf-twxbbv63dycu>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-7knpyto-cwp5r5rzhumi-q43femobqz35-vnf-twxbbv63dycu>]
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.193 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc143397650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.193 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.193 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.193 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.194 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.194 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.194 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.incoming.bytes volume: 1388 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.194 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-28T18:04:53.194052) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.194 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.195 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.incoming.bytes volume: 8490 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.195 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.195 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc143397e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.195 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.195 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.195 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.196 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.196 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-28T18:04:53.196031) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.196 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.outgoing.packets volume: 6 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.196 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.197 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.outgoing.packets volume: 66 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.197 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.197 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc143397f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.197 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.197 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.197 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.198 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.198 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.198 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-28T18:04:53.197986) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.198 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.199 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.199 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.199 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.199 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc143397230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.200 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.200 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.200 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.200 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.200 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-28T18:04:53.200331) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.200 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.200 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.201 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.201 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.requests volume: 719 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.201 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.requests volume: 114 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.201 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.201 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.202 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.202 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.202 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.202 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.203 15 DEBUG ceilometer.compute.pollsters [-] 3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.203 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.203 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.204 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.204 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.204 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.204 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.204 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.204 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.204 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.204 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.204 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.204 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.204 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.204 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.205 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.205 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.205 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.205 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.205 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.205 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.205 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.205 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.205 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.205 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.205 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.205 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:04:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:04:53.206 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:04:54 compute-0 podman[242850]: 2025-11-28 18:04:54.046570874 +0000 UTC m=+0.111235920 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 18:04:54 compute-0 podman[242851]: 2025-11-28 18:04:54.087170792 +0000 UTC m=+0.139872977 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, architecture=x86_64, release=1214.1726694543, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., version=9.4, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9)
Nov 28 18:04:57 compute-0 podman[242892]: 2025-11-28 18:04:57.053927521 +0000 UTC m=+0.112867710 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 28 18:04:58 compute-0 nova_compute[189296]: 2025-11-28 18:04:58.169 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:58 compute-0 nova_compute[189296]: 2025-11-28 18:04:58.171 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:04:59 compute-0 podman[203494]: time="2025-11-28T18:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:04:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 18:04:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4773 "" "Go-http-client/1.1"
Nov 28 18:05:00 compute-0 nova_compute[189296]: 2025-11-28 18:05:00.653 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:05:01 compute-0 openstack_network_exporter[205632]: ERROR   18:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:05:01 compute-0 openstack_network_exporter[205632]: ERROR   18:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:05:01 compute-0 openstack_network_exporter[205632]: ERROR   18:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:05:01 compute-0 openstack_network_exporter[205632]: ERROR   18:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:05:01 compute-0 openstack_network_exporter[205632]: ERROR   18:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:05:02 compute-0 nova_compute[189296]: 2025-11-28 18:05:02.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:05:02 compute-0 nova_compute[189296]: 2025-11-28 18:05:02.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:05:03 compute-0 nova_compute[189296]: 2025-11-28 18:05:03.170 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:05:03 compute-0 nova_compute[189296]: 2025-11-28 18:05:03.172 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:05:04 compute-0 nova_compute[189296]: 2025-11-28 18:05:04.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:05:04 compute-0 nova_compute[189296]: 2025-11-28 18:05:04.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:05:05 compute-0 nova_compute[189296]: 2025-11-28 18:05:05.541 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-3e7aebb1-2fd3-449c-be21-02c4d1b57717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:05:05 compute-0 nova_compute[189296]: 2025-11-28 18:05:05.542 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-3e7aebb1-2fd3-449c-be21-02c4d1b57717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:05:05 compute-0 nova_compute[189296]: 2025-11-28 18:05:05.542 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:05:07 compute-0 nova_compute[189296]: 2025-11-28 18:05:07.565 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Updating instance_info_cache with network_info: [{"id": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "address": "fa:16:3e:4f:bc:ca", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0754721-6c", "ovs_interfaceid": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:05:07 compute-0 nova_compute[189296]: 2025-11-28 18:05:07.581 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-3e7aebb1-2fd3-449c-be21-02c4d1b57717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:05:07 compute-0 nova_compute[189296]: 2025-11-28 18:05:07.581 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:05:07 compute-0 nova_compute[189296]: 2025-11-28 18:05:07.582 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:05:07 compute-0 nova_compute[189296]: 2025-11-28 18:05:07.583 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:05:07 compute-0 nova_compute[189296]: 2025-11-28 18:05:07.583 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:05:07 compute-0 nova_compute[189296]: 2025-11-28 18:05:07.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:05:07 compute-0 nova_compute[189296]: 2025-11-28 18:05:07.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:05:07 compute-0 nova_compute[189296]: 2025-11-28 18:05:07.657 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:05:07 compute-0 nova_compute[189296]: 2025-11-28 18:05:07.658 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:05:07 compute-0 nova_compute[189296]: 2025-11-28 18:05:07.659 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:05:07 compute-0 nova_compute[189296]: 2025-11-28 18:05:07.659 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:05:07 compute-0 nova_compute[189296]: 2025-11-28 18:05:07.774 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:05:07 compute-0 podman[242919]: 2025-11-28 18:05:07.782980759 +0000 UTC m=+0.060793012 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 28 18:05:07 compute-0 nova_compute[189296]: 2025-11-28 18:05:07.837 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:05:07 compute-0 nova_compute[189296]: 2025-11-28 18:05:07.839 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:05:07 compute-0 nova_compute[189296]: 2025-11-28 18:05:07.899 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:05:07 compute-0 nova_compute[189296]: 2025-11-28 18:05:07.901 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:05:07 compute-0 nova_compute[189296]: 2025-11-28 18:05:07.967 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:05:07 compute-0 nova_compute[189296]: 2025-11-28 18:05:07.968 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:05:08 compute-0 nova_compute[189296]: 2025-11-28 18:05:08.030 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:05:08 compute-0 nova_compute[189296]: 2025-11-28 18:05:08.039 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:05:08 compute-0 nova_compute[189296]: 2025-11-28 18:05:08.101 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:05:08 compute-0 nova_compute[189296]: 2025-11-28 18:05:08.102 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:05:08 compute-0 nova_compute[189296]: 2025-11-28 18:05:08.172 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:05:08 compute-0 nova_compute[189296]: 2025-11-28 18:05:08.176 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:05:08 compute-0 nova_compute[189296]: 2025-11-28 18:05:08.176 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:05:08 compute-0 nova_compute[189296]: 2025-11-28 18:05:08.267 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:05:08 compute-0 nova_compute[189296]: 2025-11-28 18:05:08.268 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:05:08 compute-0 nova_compute[189296]: 2025-11-28 18:05:08.327 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:05:08 compute-0 nova_compute[189296]: 2025-11-28 18:05:08.333 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:05:08 compute-0 nova_compute[189296]: 2025-11-28 18:05:08.396 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:05:08 compute-0 nova_compute[189296]: 2025-11-28 18:05:08.398 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:05:08 compute-0 nova_compute[189296]: 2025-11-28 18:05:08.462 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:05:08 compute-0 nova_compute[189296]: 2025-11-28 18:05:08.463 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:05:08 compute-0 nova_compute[189296]: 2025-11-28 18:05:08.527 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:05:08 compute-0 nova_compute[189296]: 2025-11-28 18:05:08.529 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:05:08 compute-0 nova_compute[189296]: 2025-11-28 18:05:08.590 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:05:08 compute-0 nova_compute[189296]: 2025-11-28 18:05:08.598 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:05:08 compute-0 nova_compute[189296]: 2025-11-28 18:05:08.663 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:05:08 compute-0 nova_compute[189296]: 2025-11-28 18:05:08.665 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:05:08 compute-0 nova_compute[189296]: 2025-11-28 18:05:08.721 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:05:08 compute-0 nova_compute[189296]: 2025-11-28 18:05:08.722 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:05:08 compute-0 nova_compute[189296]: 2025-11-28 18:05:08.786 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:05:08 compute-0 nova_compute[189296]: 2025-11-28 18:05:08.788 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:05:08 compute-0 nova_compute[189296]: 2025-11-28 18:05:08.847 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:05:09 compute-0 nova_compute[189296]: 2025-11-28 18:05:09.192 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:05:09 compute-0 nova_compute[189296]: 2025-11-28 18:05:09.194 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4587MB free_disk=72.3183364868164GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:05:09 compute-0 nova_compute[189296]: 2025-11-28 18:05:09.195 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:05:09 compute-0 nova_compute[189296]: 2025-11-28 18:05:09.195 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:05:09 compute-0 nova_compute[189296]: 2025-11-28 18:05:09.320 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 5d10f9fc-89ea-4059-8532-7e0aec0791d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:05:09 compute-0 nova_compute[189296]: 2025-11-28 18:05:09.321 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 3e7aebb1-2fd3-449c-be21-02c4d1b57717 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:05:09 compute-0 nova_compute[189296]: 2025-11-28 18:05:09.322 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:05:09 compute-0 nova_compute[189296]: 2025-11-28 18:05:09.322 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 738e5649-3e79-434b-9fbe-4aff6d71b051 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:05:09 compute-0 nova_compute[189296]: 2025-11-28 18:05:09.323 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:05:09 compute-0 nova_compute[189296]: 2025-11-28 18:05:09.323 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:05:09 compute-0 nova_compute[189296]: 2025-11-28 18:05:09.422 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:05:09 compute-0 nova_compute[189296]: 2025-11-28 18:05:09.969 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:05:09 compute-0 nova_compute[189296]: 2025-11-28 18:05:09.991 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:05:09 compute-0 nova_compute[189296]: 2025-11-28 18:05:09.991 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:05:12 compute-0 nova_compute[189296]: 2025-11-28 18:05:12.991 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:05:13 compute-0 nova_compute[189296]: 2025-11-28 18:05:13.175 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:05:17 compute-0 podman[242993]: 2025-11-28 18:05:17.017561101 +0000 UTC m=+0.076809072 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 28 18:05:17 compute-0 podman[242991]: 2025-11-28 18:05:17.021566799 +0000 UTC m=+0.087570384 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., release=1755695350, distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6)
Nov 28 18:05:17 compute-0 podman[242992]: 2025-11-28 18:05:17.053942847 +0000 UTC m=+0.115221127 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=f26160204c78771e78cdd2489258319b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 28 18:05:18 compute-0 nova_compute[189296]: 2025-11-28 18:05:18.176 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:05:18 compute-0 nova_compute[189296]: 2025-11-28 18:05:18.178 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:05:23 compute-0 podman[243049]: 2025-11-28 18:05:23.007201646 +0000 UTC m=+0.066926321 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 28 18:05:23 compute-0 podman[243050]: 2025-11-28 18:05:23.037348981 +0000 UTC m=+0.097036505 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:05:23 compute-0 nova_compute[189296]: 2025-11-28 18:05:23.178 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:05:23 compute-0 nova_compute[189296]: 2025-11-28 18:05:23.180 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:05:24 compute-0 podman[243085]: 2025-11-28 18:05:24.990472547 +0000 UTC m=+0.054783236 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 28 18:05:25 compute-0 podman[243086]: 2025-11-28 18:05:25.049989105 +0000 UTC m=+0.105499369 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, architecture=x86_64, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public)
Nov 28 18:05:28 compute-0 podman[243128]: 2025-11-28 18:05:28.037518721 +0000 UTC m=+0.086583999 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:05:28 compute-0 nova_compute[189296]: 2025-11-28 18:05:28.180 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:05:29 compute-0 podman[203494]: time="2025-11-28T18:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:05:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 18:05:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4777 "" "Go-http-client/1.1"
Nov 28 18:05:31 compute-0 openstack_network_exporter[205632]: ERROR   18:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:05:31 compute-0 openstack_network_exporter[205632]: ERROR   18:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:05:31 compute-0 openstack_network_exporter[205632]: ERROR   18:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:05:31 compute-0 openstack_network_exporter[205632]: ERROR   18:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:05:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:05:31 compute-0 openstack_network_exporter[205632]: ERROR   18:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:05:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:05:33 compute-0 nova_compute[189296]: 2025-11-28 18:05:33.182 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:05:38 compute-0 podman[243154]: 2025-11-28 18:05:38.002766973 +0000 UTC m=+0.059409737 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:05:38 compute-0 nova_compute[189296]: 2025-11-28 18:05:38.185 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:05:43 compute-0 nova_compute[189296]: 2025-11-28 18:05:43.187 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 18:05:48 compute-0 podman[243177]: 2025-11-28 18:05:48.051522348 +0000 UTC m=+0.100739984 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 28 18:05:48 compute-0 podman[243184]: 2025-11-28 18:05:48.068415229 +0000 UTC m=+0.103408349 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:05:48 compute-0 podman[243178]: 2025-11-28 18:05:48.095054798 +0000 UTC m=+0.124671867 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b, tcib_managed=true, 
io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 28 18:05:48 compute-0 nova_compute[189296]: 2025-11-28 18:05:48.189 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 18:05:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:05:52.611 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:05:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:05:52.612 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:05:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:05:52.613 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:05:53 compute-0 nova_compute[189296]: 2025-11-28 18:05:53.191 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 18:05:53 compute-0 nova_compute[189296]: 2025-11-28 18:05:53.193 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:05:53 compute-0 nova_compute[189296]: 2025-11-28 18:05:53.193 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 28 18:05:53 compute-0 nova_compute[189296]: 2025-11-28 18:05:53.193 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 28 18:05:53 compute-0 nova_compute[189296]: 2025-11-28 18:05:53.194 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 28 18:05:53 compute-0 nova_compute[189296]: 2025-11-28 18:05:53.195 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:05:54 compute-0 podman[243231]: 2025-11-28 18:05:54.011697383 +0000 UTC m=+0.069775860 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Nov 28 18:05:54 compute-0 podman[243232]: 2025-11-28 18:05:54.083915722 +0000 UTC m=+0.121925191 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:05:56 compute-0 podman[243271]: 2025-11-28 18:05:56.025256139 +0000 UTC m=+0.081604898 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 18:05:56 compute-0 podman[243272]: 2025-11-28 18:05:56.025638408 +0000 UTC m=+0.082838108 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, config_id=edpm, container_name=kepler, release=1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 28 18:05:58 compute-0 nova_compute[189296]: 2025-11-28 18:05:58.196 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:05:59 compute-0 podman[243314]: 2025-11-28 18:05:59.089094223 +0000 UTC m=+0.145208416 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 28 18:05:59 compute-0 podman[203494]: time="2025-11-28T18:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:05:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 18:05:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4770 "" "Go-http-client/1.1"
Nov 28 18:06:00 compute-0 nova_compute[189296]: 2025-11-28 18:06:00.621 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:06:01 compute-0 openstack_network_exporter[205632]: ERROR   18:06:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:06:01 compute-0 openstack_network_exporter[205632]: ERROR   18:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:06:01 compute-0 openstack_network_exporter[205632]: ERROR   18:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:06:01 compute-0 openstack_network_exporter[205632]: ERROR   18:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:06:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:06:01 compute-0 openstack_network_exporter[205632]: ERROR   18:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:06:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:06:03 compute-0 nova_compute[189296]: 2025-11-28 18:06:03.198 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:03 compute-0 nova_compute[189296]: 2025-11-28 18:06:03.768 189300 DEBUG oslo_concurrency.lockutils [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:06:03 compute-0 nova_compute[189296]: 2025-11-28 18:06:03.769 189300 DEBUG oslo_concurrency.lockutils [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:06:03 compute-0 nova_compute[189296]: 2025-11-28 18:06:03.770 189300 DEBUG oslo_concurrency.lockutils [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:06:03 compute-0 nova_compute[189296]: 2025-11-28 18:06:03.770 189300 DEBUG oslo_concurrency.lockutils [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:06:03 compute-0 nova_compute[189296]: 2025-11-28 18:06:03.770 189300 DEBUG oslo_concurrency.lockutils [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:06:03 compute-0 nova_compute[189296]: 2025-11-28 18:06:03.772 189300 INFO nova.compute.manager [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Terminating instance#033[00m
Nov 28 18:06:03 compute-0 nova_compute[189296]: 2025-11-28 18:06:03.774 189300 DEBUG nova.compute.manager [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 28 18:06:03 compute-0 kernel: tapb0754721-6c (unregistering): left promiscuous mode
Nov 28 18:06:03 compute-0 NetworkManager[56307]: <info>  [1764353163.8370] device (tapb0754721-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 28 18:06:03 compute-0 nova_compute[189296]: 2025-11-28 18:06:03.845 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:03 compute-0 ovn_controller[97771]: 2025-11-28T18:06:03Z|00057|binding|INFO|Releasing lport b0754721-6c06-49b9-8437-3ed1125ed2c6 from this chassis (sb_readonly=0)
Nov 28 18:06:03 compute-0 ovn_controller[97771]: 2025-11-28T18:06:03Z|00058|binding|INFO|Setting lport b0754721-6c06-49b9-8437-3ed1125ed2c6 down in Southbound
Nov 28 18:06:03 compute-0 ovn_controller[97771]: 2025-11-28T18:06:03Z|00059|binding|INFO|Removing iface tapb0754721-6c ovn-installed in OVS
Nov 28 18:06:03 compute-0 nova_compute[189296]: 2025-11-28 18:06:03.848 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:03 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:06:03.854 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:bc:ca 192.168.0.158'], port_security=['fa:16:3e:4f:bc:ca 192.168.0.158'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-po7lv7knpyto-6e6fe7uhqqsg-35p6vulzyxtr-port-slpgfh5aovby', 'neutron:cidrs': '192.168.0.158/24', 'neutron:device_id': '3e7aebb1-2fd3-449c-be21-02c4d1b57717', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-po7lv7knpyto-6e6fe7uhqqsg-35p6vulzyxtr-port-slpgfh5aovby', 'neutron:project_id': '79ee04b003ca4eb8a045699c7852a8b0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a309e23b-efb6-4377-8050-5a658324ee07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.194', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37710b57-0bdd-4c1a-aa8d-366aa83fbf51, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=b0754721-6c06-49b9-8437-3ed1125ed2c6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:06:03 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:06:03.855 106624 INFO neutron.agent.ovn.metadata.agent [-] Port b0754721-6c06-49b9-8437-3ed1125ed2c6 in datapath 5cc11a5f-7338-49fd-ba02-2db7ff676c4f unbound from our chassis#033[00m
Nov 28 18:06:03 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:06:03.856 106624 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cc11a5f-7338-49fd-ba02-2db7ff676c4f#033[00m
Nov 28 18:06:03 compute-0 nova_compute[189296]: 2025-11-28 18:06:03.861 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:03 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:06:03.875 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[88a93afc-917c-4477-b0e0-d55cb4456c53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:06:03 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Nov 28 18:06:03 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 5min 12.578s CPU time.
Nov 28 18:06:03 compute-0 systemd-machined[155703]: Machine qemu-2-instance-00000002 terminated.
Nov 28 18:06:03 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:06:03.922 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[db2e28f1-96e3-48d9-8e7e-2c176ad52e36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:06:03 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:06:03.926 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[98bd7366-a572-4dcb-9d44-58ca4042d7e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:06:03 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:06:03.964 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[fb4117aa-513c-4e15-9fe4-e18e6bbfe831]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:06:03 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:06:03.983 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[db301513-a592-4b05-8e84-493680cc8907]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc11a5f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:38:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 15, 'rx_bytes': 532, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 15, 'rx_bytes': 532, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 370971, 'reachable_time': 43855, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243350, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:06:04 compute-0 nova_compute[189296]: 2025-11-28 18:06:03.999 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:06:03.999 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[e0a402a9-89a4-4503-aaa6-ccfe9f11158e]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap5cc11a5f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 370983, 'tstamp': 370983}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243352, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5cc11a5f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 370986, 'tstamp': 370986}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243352, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:06:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:06:04.001 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc11a5f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:06:04 compute-0 nova_compute[189296]: 2025-11-28 18:06:04.002 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:04 compute-0 nova_compute[189296]: 2025-11-28 18:06:04.009 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:06:04.010 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cc11a5f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:06:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:06:04.010 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:06:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:06:04.011 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cc11a5f-70, col_values=(('external_ids', {'iface-id': '467e3797-177d-4174-b963-0efbd15595b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:06:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:06:04.011 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:06:04 compute-0 nova_compute[189296]: 2025-11-28 18:06:04.052 189300 INFO nova.virt.libvirt.driver [-] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Instance destroyed successfully.#033[00m
Nov 28 18:06:04 compute-0 nova_compute[189296]: 2025-11-28 18:06:04.052 189300 DEBUG nova.objects.instance [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lazy-loading 'resources' on Instance uuid 3e7aebb1-2fd3-449c-be21-02c4d1b57717 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:06:04 compute-0 nova_compute[189296]: 2025-11-28 18:06:04.067 189300 DEBUG nova.virt.libvirt.vif [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-28T17:57:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-7knpyto-6e6fe7uhqqsg-35p6vulzyxtr-vnf-mf7ve6yw5m3s',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7knpyto-6e6fe7uhqqsg-35p6vulzyxtr-vnf-mf7ve6yw5m3s',id=2,image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-28T17:57:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='ac6a0a76-f006-4c50-a4a8-904a1f128161'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='79ee04b003ca4eb8a045699c7852a8b0',ramdisk_id='',reservation_id='r-i6lofcfj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-28T17:57:14Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wNjUxOTg2ODQ5OTU1NTczNDc1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA2NTE5ODY4NDk5NTU1NzM0NzU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDY1MTk4Njg0OTk1NTU3MzQ3NT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA2NTE5ODY4NDk5NTU1NzM0NzU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wNjUxOTg2ODQ5OTU1NTczNDc1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wNjUxOTg2ODQ5OTU1NTczNDc1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Nov 28 18:06:04 compute-0 nova_compute[189296]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDY1M
Tk4Njg0OTk1NTU3MzQ3NT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA2NTE5ODY4NDk5NTU1NzM0NzU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wNjUxOTg2ODQ5OTU1NTczNDc1PT0tLQo=',user_id='6a35450c34a344b1a4e63aae1be2b971',uuid=3e7aebb1-2fd3-449c-be21-02c4d1b57717,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "address": "fa:16:3e:4f:bc:ca", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0754721-6c", "ovs_interfaceid": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 28 18:06:04 compute-0 nova_compute[189296]: 2025-11-28 18:06:04.068 189300 DEBUG nova.network.os_vif_util [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converting VIF {"id": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "address": "fa:16:3e:4f:bc:ca", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0754721-6c", "ovs_interfaceid": "b0754721-6c06-49b9-8437-3ed1125ed2c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:06:04 compute-0 nova_compute[189296]: 2025-11-28 18:06:04.069 189300 DEBUG nova.network.os_vif_util [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4f:bc:ca,bridge_name='br-int',has_traffic_filtering=True,id=b0754721-6c06-49b9-8437-3ed1125ed2c6,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb0754721-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:06:04 compute-0 nova_compute[189296]: 2025-11-28 18:06:04.069 189300 DEBUG os_vif [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4f:bc:ca,bridge_name='br-int',has_traffic_filtering=True,id=b0754721-6c06-49b9-8437-3ed1125ed2c6,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb0754721-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 28 18:06:04 compute-0 rsyslogd[236416]: message too long (8192) with configured size 8096, begin of message is: 2025-11-28 18:06:04.067 189300 DEBUG nova.virt.libvirt.vif [None req-27bfc69e-ec [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 28 18:06:04 compute-0 nova_compute[189296]: 2025-11-28 18:06:04.071 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:04 compute-0 nova_compute[189296]: 2025-11-28 18:06:04.072 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0754721-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:06:04 compute-0 nova_compute[189296]: 2025-11-28 18:06:04.073 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:04 compute-0 nova_compute[189296]: 2025-11-28 18:06:04.075 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:04 compute-0 nova_compute[189296]: 2025-11-28 18:06:04.078 189300 INFO os_vif [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4f:bc:ca,bridge_name='br-int',has_traffic_filtering=True,id=b0754721-6c06-49b9-8437-3ed1125ed2c6,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb0754721-6c')#033[00m
Nov 28 18:06:04 compute-0 nova_compute[189296]: 2025-11-28 18:06:04.078 189300 INFO nova.virt.libvirt.driver [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Deleting instance files /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717_del#033[00m
Nov 28 18:06:04 compute-0 nova_compute[189296]: 2025-11-28 18:06:04.079 189300 INFO nova.virt.libvirt.driver [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Deletion of /var/lib/nova/instances/3e7aebb1-2fd3-449c-be21-02c4d1b57717_del complete#033[00m
Nov 28 18:06:04 compute-0 nova_compute[189296]: 2025-11-28 18:06:04.132 189300 INFO nova.compute.manager [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Took 0.36 seconds to destroy the instance on the hypervisor.#033[00m
Nov 28 18:06:04 compute-0 nova_compute[189296]: 2025-11-28 18:06:04.133 189300 DEBUG oslo.service.loopingcall [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 28 18:06:04 compute-0 nova_compute[189296]: 2025-11-28 18:06:04.134 189300 DEBUG nova.compute.manager [-] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 28 18:06:04 compute-0 nova_compute[189296]: 2025-11-28 18:06:04.134 189300 DEBUG nova.network.neutron [-] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 28 18:06:04 compute-0 nova_compute[189296]: 2025-11-28 18:06:04.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:06:04 compute-0 nova_compute[189296]: 2025-11-28 18:06:04.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:06:05 compute-0 nova_compute[189296]: 2025-11-28 18:06:05.072 189300 DEBUG nova.compute.manager [req-42121f6d-2a28-4c11-97e6-0e23064e3bf7 req-3d0e8592-9601-48ec-aad2-17c2c3bb4b8f 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Received event network-vif-unplugged-b0754721-6c06-49b9-8437-3ed1125ed2c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:06:05 compute-0 nova_compute[189296]: 2025-11-28 18:06:05.072 189300 DEBUG oslo_concurrency.lockutils [req-42121f6d-2a28-4c11-97e6-0e23064e3bf7 req-3d0e8592-9601-48ec-aad2-17c2c3bb4b8f 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:06:05 compute-0 nova_compute[189296]: 2025-11-28 18:06:05.072 189300 DEBUG oslo_concurrency.lockutils [req-42121f6d-2a28-4c11-97e6-0e23064e3bf7 req-3d0e8592-9601-48ec-aad2-17c2c3bb4b8f 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:06:05 compute-0 nova_compute[189296]: 2025-11-28 18:06:05.072 189300 DEBUG oslo_concurrency.lockutils [req-42121f6d-2a28-4c11-97e6-0e23064e3bf7 req-3d0e8592-9601-48ec-aad2-17c2c3bb4b8f 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:06:05 compute-0 nova_compute[189296]: 2025-11-28 18:06:05.073 189300 DEBUG nova.compute.manager [req-42121f6d-2a28-4c11-97e6-0e23064e3bf7 req-3d0e8592-9601-48ec-aad2-17c2c3bb4b8f 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] No waiting events found dispatching network-vif-unplugged-b0754721-6c06-49b9-8437-3ed1125ed2c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:06:05 compute-0 nova_compute[189296]: 2025-11-28 18:06:05.073 189300 DEBUG nova.compute.manager [req-42121f6d-2a28-4c11-97e6-0e23064e3bf7 req-3d0e8592-9601-48ec-aad2-17c2c3bb4b8f 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Received event network-vif-unplugged-b0754721-6c06-49b9-8437-3ed1125ed2c6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 28 18:06:05 compute-0 nova_compute[189296]: 2025-11-28 18:06:05.537 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:06:05 compute-0 nova_compute[189296]: 2025-11-28 18:06:05.537 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:06:05 compute-0 nova_compute[189296]: 2025-11-28 18:06:05.538 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:06:06 compute-0 nova_compute[189296]: 2025-11-28 18:06:06.149 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:06:06.149 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:8b:d3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:a2:f8:d3:3f:9a'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:06:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:06:06.150 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 28 18:06:07 compute-0 nova_compute[189296]: 2025-11-28 18:06:07.997 189300 DEBUG nova.compute.manager [req-c7314e63-99e5-4559-8f86-7fcbdd547d97 req-6d74bbe3-6e55-415d-8ebb-2d8dbd59520d 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Received event network-vif-plugged-b0754721-6c06-49b9-8437-3ed1125ed2c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:06:07 compute-0 nova_compute[189296]: 2025-11-28 18:06:07.997 189300 DEBUG oslo_concurrency.lockutils [req-c7314e63-99e5-4559-8f86-7fcbdd547d97 req-6d74bbe3-6e55-415d-8ebb-2d8dbd59520d 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:06:07 compute-0 nova_compute[189296]: 2025-11-28 18:06:07.997 189300 DEBUG oslo_concurrency.lockutils [req-c7314e63-99e5-4559-8f86-7fcbdd547d97 req-6d74bbe3-6e55-415d-8ebb-2d8dbd59520d 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:06:07 compute-0 nova_compute[189296]: 2025-11-28 18:06:07.997 189300 DEBUG oslo_concurrency.lockutils [req-c7314e63-99e5-4559-8f86-7fcbdd547d97 req-6d74bbe3-6e55-415d-8ebb-2d8dbd59520d 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:06:07 compute-0 nova_compute[189296]: 2025-11-28 18:06:07.997 189300 DEBUG nova.compute.manager [req-c7314e63-99e5-4559-8f86-7fcbdd547d97 req-6d74bbe3-6e55-415d-8ebb-2d8dbd59520d 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] No waiting events found dispatching network-vif-plugged-b0754721-6c06-49b9-8437-3ed1125ed2c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:06:07 compute-0 nova_compute[189296]: 2025-11-28 18:06:07.998 189300 WARNING nova.compute.manager [req-c7314e63-99e5-4559-8f86-7fcbdd547d97 req-6d74bbe3-6e55-415d-8ebb-2d8dbd59520d 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Received unexpected event network-vif-plugged-b0754721-6c06-49b9-8437-3ed1125ed2c6 for instance with vm_state active and task_state deleting.#033[00m
Nov 28 18:06:07 compute-0 nova_compute[189296]: 2025-11-28 18:06:07.998 189300 DEBUG nova.compute.manager [req-c7314e63-99e5-4559-8f86-7fcbdd547d97 req-6d74bbe3-6e55-415d-8ebb-2d8dbd59520d 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Received event network-changed-b0754721-6c06-49b9-8437-3ed1125ed2c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:06:07 compute-0 nova_compute[189296]: 2025-11-28 18:06:07.998 189300 DEBUG nova.compute.manager [req-c7314e63-99e5-4559-8f86-7fcbdd547d97 req-6d74bbe3-6e55-415d-8ebb-2d8dbd59520d 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Refreshing instance network info cache due to event network-changed-b0754721-6c06-49b9-8437-3ed1125ed2c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 28 18:06:07 compute-0 nova_compute[189296]: 2025-11-28 18:06:07.998 189300 DEBUG oslo_concurrency.lockutils [req-c7314e63-99e5-4559-8f86-7fcbdd547d97 req-6d74bbe3-6e55-415d-8ebb-2d8dbd59520d 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-3e7aebb1-2fd3-449c-be21-02c4d1b57717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:06:07 compute-0 nova_compute[189296]: 2025-11-28 18:06:07.998 189300 DEBUG oslo_concurrency.lockutils [req-c7314e63-99e5-4559-8f86-7fcbdd547d97 req-6d74bbe3-6e55-415d-8ebb-2d8dbd59520d 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-3e7aebb1-2fd3-449c-be21-02c4d1b57717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:06:07 compute-0 nova_compute[189296]: 2025-11-28 18:06:07.998 189300 DEBUG nova.network.neutron [req-c7314e63-99e5-4559-8f86-7fcbdd547d97 req-6d74bbe3-6e55-415d-8ebb-2d8dbd59520d 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Refreshing network info cache for port b0754721-6c06-49b9-8437-3ed1125ed2c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 18:06:08 compute-0 nova_compute[189296]: 2025-11-28 18:06:08.201 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:08 compute-0 nova_compute[189296]: 2025-11-28 18:06:08.567 189300 INFO nova.network.neutron [req-c7314e63-99e5-4559-8f86-7fcbdd547d97 req-6d74bbe3-6e55-415d-8ebb-2d8dbd59520d 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Port b0754721-6c06-49b9-8437-3ed1125ed2c6 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 28 18:06:08 compute-0 nova_compute[189296]: 2025-11-28 18:06:08.567 189300 DEBUG nova.network.neutron [req-c7314e63-99e5-4559-8f86-7fcbdd547d97 req-6d74bbe3-6e55-415d-8ebb-2d8dbd59520d 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:06:08 compute-0 nova_compute[189296]: 2025-11-28 18:06:08.587 189300 DEBUG oslo_concurrency.lockutils [req-c7314e63-99e5-4559-8f86-7fcbdd547d97 req-6d74bbe3-6e55-415d-8ebb-2d8dbd59520d 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-3e7aebb1-2fd3-449c-be21-02c4d1b57717" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:06:08 compute-0 nova_compute[189296]: 2025-11-28 18:06:08.610 189300 DEBUG nova.network.neutron [-] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:06:08 compute-0 nova_compute[189296]: 2025-11-28 18:06:08.623 189300 INFO nova.compute.manager [-] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Took 4.49 seconds to deallocate network for instance.#033[00m
Nov 28 18:06:08 compute-0 nova_compute[189296]: 2025-11-28 18:06:08.661 189300 DEBUG oslo_concurrency.lockutils [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:06:08 compute-0 nova_compute[189296]: 2025-11-28 18:06:08.662 189300 DEBUG oslo_concurrency.lockutils [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:06:08 compute-0 nova_compute[189296]: 2025-11-28 18:06:08.700 189300 DEBUG nova.scheduler.client.report [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Refreshing inventories for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 28 18:06:08 compute-0 nova_compute[189296]: 2025-11-28 18:06:08.729 189300 DEBUG nova.scheduler.client.report [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Updating ProviderTree inventory for provider d10a9930-4504-4222-97f7-6727a5a2d43b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 28 18:06:08 compute-0 nova_compute[189296]: 2025-11-28 18:06:08.730 189300 DEBUG nova.compute.provider_tree [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Updating inventory in ProviderTree for provider d10a9930-4504-4222-97f7-6727a5a2d43b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 28 18:06:08 compute-0 nova_compute[189296]: 2025-11-28 18:06:08.753 189300 DEBUG nova.scheduler.client.report [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Refreshing aggregate associations for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 28 18:06:08 compute-0 nova_compute[189296]: 2025-11-28 18:06:08.758 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Updating instance_info_cache with network_info: [{"id": "7b3b067b-5dff-4342-98fa-c66e054d025d", "address": "fa:16:3e:7e:01:76", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b067b-5d", "ovs_interfaceid": "7b3b067b-5dff-4342-98fa-c66e054d025d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:06:08 compute-0 nova_compute[189296]: 2025-11-28 18:06:08.863 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:06:08 compute-0 nova_compute[189296]: 2025-11-28 18:06:08.864 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:06:08 compute-0 nova_compute[189296]: 2025-11-28 18:06:08.864 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:06:08 compute-0 nova_compute[189296]: 2025-11-28 18:06:08.864 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:06:08 compute-0 nova_compute[189296]: 2025-11-28 18:06:08.864 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:06:08 compute-0 nova_compute[189296]: 2025-11-28 18:06:08.865 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:06:08 compute-0 nova_compute[189296]: 2025-11-28 18:06:08.865 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:06:08 compute-0 nova_compute[189296]: 2025-11-28 18:06:08.865 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:06:08 compute-0 nova_compute[189296]: 2025-11-28 18:06:08.868 189300 DEBUG nova.scheduler.client.report [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Refreshing trait associations for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b, traits: HW_CPU_X86_ABM,COMPUTE_NODE,HW_CPU_X86_SVM,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SATA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 28 18:06:08 compute-0 nova_compute[189296]: 2025-11-28 18:06:08.886 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:06:08 compute-0 nova_compute[189296]: 2025-11-28 18:06:08.979 189300 DEBUG nova.compute.provider_tree [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:06:08 compute-0 nova_compute[189296]: 2025-11-28 18:06:08.993 189300 DEBUG nova.scheduler.client.report [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.018 189300 DEBUG oslo_concurrency.lockutils [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.020 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.020 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.021 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:06:09 compute-0 podman[243373]: 2025-11-28 18:06:09.042975989 +0000 UTC m=+0.086727142 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.060 189300 INFO nova.scheduler.client.report [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Deleted allocations for instance 3e7aebb1-2fd3-449c-be21-02c4d1b57717#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.074 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.151 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.173 189300 DEBUG oslo_concurrency.lockutils [None req-27bfc69e-ec5f-40c6-9f48-518134499abf 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "3e7aebb1-2fd3-449c-be21-02c4d1b57717" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.404s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.232 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.232 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.289 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.291 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.385 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.386 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.453 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.459 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.534 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.536 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.593 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.594 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.672 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.674 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.731 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.739 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.798 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.799 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.860 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.861 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.918 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.919 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:06:09 compute-0 nova_compute[189296]: 2025-11-28 18:06:09.976 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:06:10 compute-0 nova_compute[189296]: 2025-11-28 18:06:10.309 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:06:10 compute-0 nova_compute[189296]: 2025-11-28 18:06:10.310 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4730MB free_disk=72.34026718139648GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:06:10 compute-0 nova_compute[189296]: 2025-11-28 18:06:10.311 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:06:10 compute-0 nova_compute[189296]: 2025-11-28 18:06:10.311 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:06:10 compute-0 nova_compute[189296]: 2025-11-28 18:06:10.399 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 5d10f9fc-89ea-4059-8532-7e0aec0791d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:06:10 compute-0 nova_compute[189296]: 2025-11-28 18:06:10.400 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:06:10 compute-0 nova_compute[189296]: 2025-11-28 18:06:10.400 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 738e5649-3e79-434b-9fbe-4aff6d71b051 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:06:10 compute-0 nova_compute[189296]: 2025-11-28 18:06:10.400 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:06:10 compute-0 nova_compute[189296]: 2025-11-28 18:06:10.400 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:06:10 compute-0 nova_compute[189296]: 2025-11-28 18:06:10.476 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:06:10 compute-0 nova_compute[189296]: 2025-11-28 18:06:10.497 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:06:10 compute-0 nova_compute[189296]: 2025-11-28 18:06:10.518 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:06:10 compute-0 nova_compute[189296]: 2025-11-28 18:06:10.518 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.207s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:06:11 compute-0 nova_compute[189296]: 2025-11-28 18:06:11.279 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:06:11 compute-0 nova_compute[189296]: 2025-11-28 18:06:11.619 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:06:12 compute-0 nova_compute[189296]: 2025-11-28 18:06:12.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:06:13 compute-0 nova_compute[189296]: 2025-11-28 18:06:13.202 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:14 compute-0 nova_compute[189296]: 2025-11-28 18:06:14.076 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:15 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:06:15.153 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d60b742f-7e94-4137-b50a-cfc8eac54167, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:06:18 compute-0 nova_compute[189296]: 2025-11-28 18:06:18.206 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:19 compute-0 podman[243436]: 2025-11-28 18:06:19.032193184 +0000 UTC m=+0.086200800 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Nov 28 18:06:19 compute-0 nova_compute[189296]: 2025-11-28 18:06:19.049 189300 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764353164.0486543, 3e7aebb1-2fd3-449c-be21-02c4d1b57717 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:06:19 compute-0 nova_compute[189296]: 2025-11-28 18:06:19.049 189300 INFO nova.compute.manager [-] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] VM Stopped (Lifecycle Event)#033[00m
Nov 28 18:06:19 compute-0 podman[243435]: 2025-11-28 18:06:19.072028464 +0000 UTC m=+0.121362106 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, release=1755695350, container_name=openstack_network_exporter, vendor=Red Hat, Inc.)
Nov 28 18:06:19 compute-0 nova_compute[189296]: 2025-11-28 18:06:19.077 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:19 compute-0 nova_compute[189296]: 2025-11-28 18:06:19.079 189300 DEBUG nova.compute.manager [None req-2554ccd1-7ae9-4acb-8612-ced79147cd0a - - - - - -] [instance: 3e7aebb1-2fd3-449c-be21-02c4d1b57717] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:06:19 compute-0 podman[243437]: 2025-11-28 18:06:19.106143786 +0000 UTC m=+0.137136161 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 28 18:06:21 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 28 18:06:23 compute-0 nova_compute[189296]: 2025-11-28 18:06:23.208 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:24 compute-0 nova_compute[189296]: 2025-11-28 18:06:24.079 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:25 compute-0 podman[243495]: 2025-11-28 18:06:25.035507462 +0000 UTC m=+0.084761996 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:06:25 compute-0 podman[243496]: 2025-11-28 18:06:25.053882769 +0000 UTC m=+0.094517233 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi)
Nov 28 18:06:26 compute-0 podman[243534]: 2025-11-28 18:06:26.994335995 +0000 UTC m=+0.060034573 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 18:06:27 compute-0 podman[243535]: 2025-11-28 18:06:27.032181106 +0000 UTC m=+0.092833312 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, release=1214.1726694543, config_id=edpm, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, release-0.7.12=)
Nov 28 18:06:28 compute-0 nova_compute[189296]: 2025-11-28 18:06:28.210 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:29 compute-0 nova_compute[189296]: 2025-11-28 18:06:29.081 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:29 compute-0 podman[203494]: time="2025-11-28T18:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:06:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 18:06:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4778 "" "Go-http-client/1.1"
Nov 28 18:06:30 compute-0 podman[243575]: 2025-11-28 18:06:30.263592899 +0000 UTC m=+0.326567474 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:06:31 compute-0 openstack_network_exporter[205632]: ERROR   18:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:06:31 compute-0 openstack_network_exporter[205632]: ERROR   18:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:06:31 compute-0 openstack_network_exporter[205632]: ERROR   18:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:06:31 compute-0 openstack_network_exporter[205632]: ERROR   18:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:06:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:06:31 compute-0 openstack_network_exporter[205632]: ERROR   18:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:06:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:06:33 compute-0 nova_compute[189296]: 2025-11-28 18:06:33.212 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:34 compute-0 nova_compute[189296]: 2025-11-28 18:06:34.083 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:38 compute-0 nova_compute[189296]: 2025-11-28 18:06:38.215 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:39 compute-0 nova_compute[189296]: 2025-11-28 18:06:39.086 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:40 compute-0 podman[243603]: 2025-11-28 18:06:40.049446788 +0000 UTC m=+0.098725606 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 28 18:06:40 compute-0 ovn_controller[97771]: 2025-11-28T18:06:40Z|00060|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Nov 28 18:06:43 compute-0 nova_compute[189296]: 2025-11-28 18:06:43.216 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:44 compute-0 nova_compute[189296]: 2025-11-28 18:06:44.088 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:48 compute-0 nova_compute[189296]: 2025-11-28 18:06:48.219 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:49 compute-0 nova_compute[189296]: 2025-11-28 18:06:49.090 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:50 compute-0 podman[243627]: 2025-11-28 18:06:50.022776465 +0000 UTC m=+0.080086750 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 28 18:06:50 compute-0 podman[243626]: 2025-11-28 18:06:50.032315942 +0000 UTC m=+0.092324371 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-type=git, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, release=1755695350, distribution-scope=public, name=ubi9-minimal, config_id=edpm, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc.)
Nov 28 18:06:50 compute-0 podman[243628]: 2025-11-28 18:06:50.071736577 +0000 UTC m=+0.114169550 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.979 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.979 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.979 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.981 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc143395760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.983 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.983 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.985 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.986 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.986 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.986 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.989 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.989 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.989 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.990 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.990 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.992 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf', 'name': 'vn-7knpyto-myqv6vc5iwu6-3wmt66b4jk5x-vnf-uuehi3czwwyv', 'flavor': {'id': 'e125fa74-9e9f-47dc-8c8e-699980f99f10', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'f54c2688-82d2-4cd3-8c3b-96e774162948'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '79ee04b003ca4eb8a045699c7852a8b0', 'user_id': '6a35450c34a344b1a4e63aae1be2b971', 'hostId': 'db9a2769e8f144ae30ff05291a20072f031ca2fe14565f94b8d8a651', 'status': 'active', 'metadata': {'metering.server_group': 'ac6a0a76-f006-4c50-a4a8-904a1f128161'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.995 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '738e5649-3e79-434b-9fbe-4aff6d71b051', 'name': 'vn-7knpyto-cwp5r5rzhumi-q43femobqz35-vnf-twxbbv63dycu', 'flavor': {'id': 'e125fa74-9e9f-47dc-8c8e-699980f99f10', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'f54c2688-82d2-4cd3-8c3b-96e774162948'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '79ee04b003ca4eb8a045699c7852a8b0', 'user_id': '6a35450c34a344b1a4e63aae1be2b971', 'hostId': 'db9a2769e8f144ae30ff05291a20072f031ca2fe14565f94b8d8a651', 'status': 'active', 'metadata': {'metering.server_group': 'ac6a0a76-f006-4c50-a4a8-904a1f128161'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:06:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.999 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5d10f9fc-89ea-4059-8532-7e0aec0791d6', 'name': 'test_0', 'flavor': {'id': 'e125fa74-9e9f-47dc-8c8e-699980f99f10', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'f54c2688-82d2-4cd3-8c3b-96e774162948'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '79ee04b003ca4eb8a045699c7852a8b0', 'user_id': '6a35450c34a344b1a4e63aae1be2b971', 'hostId': 'db9a2769e8f144ae30ff05291a20072f031ca2fe14565f94b8d8a651', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.999 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.999 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:51.999 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.000 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.001 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-28T18:06:52.000040) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.023 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.024 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.025 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.049 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.049 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.050 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.077 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.077 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.077 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.078 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.078 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc1433970b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.078 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.078 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.078 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.078 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.079 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-28T18:06:52.078919) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.141 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.142 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.142 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.203 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.204 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.204 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.275 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.275 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.275 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.276 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.276 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc1433971d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.276 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.276 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.276 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.276 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.277 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.latency volume: 301308176 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.277 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-28T18:06:52.276768) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.277 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.latency volume: 58590956 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.277 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.latency volume: 53252991 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.277 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.latency volume: 351803974 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.278 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.latency volume: 86546736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.278 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.latency volume: 62239108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.278 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.latency volume: 284678818 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.278 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.latency volume: 69824352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.278 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.latency volume: 37055244 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.279 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.279 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc143397c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.279 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.279 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.279 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.279 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.280 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-28T18:06:52.279879) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.283 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.286 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.289 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.289 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.290 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc143397620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.290 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.290 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.290 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.290 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.291 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-28T18:06:52.290664) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.312 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/memory.usage volume: 49.046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.334 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.355 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.355 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.356 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc143397260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.356 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.356 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.356 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.356 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.356 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.357 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.357 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.357 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.358 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.358 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.358 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.358 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.359 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.359 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.360 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc143397290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.360 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.360 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.360 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.360 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.360 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-28T18:06:52.356560) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.361 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.361 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.361 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.361 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.362 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.362 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.362 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.363 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.363 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-28T18:06:52.360832) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.363 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.364 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.364 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc143408290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.364 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.364 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.364 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.364 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.364 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.365 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-28T18:06:52.364703) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.365 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.365 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.365 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.366 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc1433972f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.366 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.366 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.366 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.366 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.366 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.latency volume: 402835350 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.367 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-28T18:06:52.366487) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.367 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.latency volume: 7108483 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.367 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.367 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.latency volume: 951715343 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.367 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.latency volume: 7967925 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.368 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.368 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.latency volume: 646402207 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.368 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.latency volume: 6041958 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.368 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.369 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.369 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc144640f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.369 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.369 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.370 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.370 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.370 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.requests volume: 239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.370 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-28T18:06:52.370085) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.370 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.371 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.371 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.371 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.371 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.372 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.372 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.372 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.373 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.373 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc1433976b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.373 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.373 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.373 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.373 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.373 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.373 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.incoming.bytes.delta volume: 182 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.374 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.374 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.374 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-28T18:06:52.373489) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.374 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc143397fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.375 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.375 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.375 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.375 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.375 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.375 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-28T18:06:52.375334) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.375 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.376 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.376 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.376 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc14457db80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.376 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.376 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.376 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.376 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.377 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/cpu volume: 34570000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.377 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-28T18:06:52.376915) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.377 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/cpu volume: 34460000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.377 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/cpu volume: 38270000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.377 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.378 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc143397950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.378 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.378 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc143397380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.378 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.378 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.378 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.378 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.379 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-28T18:06:52.378622) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.379 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.379 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc143397bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.379 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.379 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.379 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.379 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.380 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.incoming.packets volume: 18 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.380 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-28T18:06:52.379925) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.380 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.380 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.381 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.381 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc1433973e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.381 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.381 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.381 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.381 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.381 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-28T18:06:52.381468) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.382 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.382 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc143397c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.382 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.382 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.382 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.382 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.382 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.382 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.383 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.383 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-28T18:06:52.382540) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.383 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.383 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc143397ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.383 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.383 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.384 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.384 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.384 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.384 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.384 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.385 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.385 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc1460ad370>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.385 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.385 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.385 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-28T18:06:52.384211) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.385 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.385 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.386 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.386 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-28T18:06:52.385855) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.386 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.386 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.386 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.387 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.387 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.387 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.allocation volume: 21962752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.387 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.388 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.388 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.388 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc143397d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.388 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.388 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.389 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.389 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.389 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.389 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-28T18:06:52.389182) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.389 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.outgoing.bytes.delta volume: 1209 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.389 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.390 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.390 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc143397e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.390 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.390 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc143397650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.390 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.390 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.391 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.391 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.391 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.incoming.bytes volume: 1738 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.391 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-28T18:06:52.391179) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.391 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.392 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.bytes volume: 2388 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.392 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.392 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc143397e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.392 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.392 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.392 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.392 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.393 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.393 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-28T18:06:52.392904) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.393 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.393 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.394 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.394 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc143397f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.394 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.394 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.394 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.394 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.394 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.394 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-28T18:06:52.394543) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.395 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.395 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.395 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.395 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc143397230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.396 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.396 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.396 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.396 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.396 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.396 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-28T18:06:52.396519) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.397 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.397 15 DEBUG ceilometer.compute.pollsters [-] fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.397 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.397 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.398 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.398 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.398 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.398 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.399 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.400 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.400 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.401 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.401 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.402 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.402 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.402 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.403 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.403 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.403 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.403 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.404 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.404 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.404 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.405 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.405 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.405 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.406 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.406 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.406 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.407 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.407 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.407 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.408 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.408 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:06:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:06:52.408 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:06:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:06:52.612 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:06:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:06:52.613 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:06:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:06:52.614 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:06:53 compute-0 nova_compute[189296]: 2025-11-28 18:06:53.221 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:54 compute-0 nova_compute[189296]: 2025-11-28 18:06:54.092 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:56 compute-0 podman[243685]: 2025-11-28 18:06:56.019994757 +0000 UTC m=+0.071946267 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 28 18:06:56 compute-0 podman[243684]: 2025-11-28 18:06:56.024800582 +0000 UTC m=+0.081552226 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack 
Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 28 18:06:58 compute-0 podman[243721]: 2025-11-28 18:06:58.036965153 +0000 UTC m=+0.082514758 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:06:58 compute-0 podman[243722]: 2025-11-28 18:06:58.050537814 +0000 UTC m=+0.086741008 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release-0.7.12=, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Nov 28 18:06:58 compute-0 nova_compute[189296]: 2025-11-28 18:06:58.224 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:59 compute-0 nova_compute[189296]: 2025-11-28 18:06:59.094 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:06:59 compute-0 podman[203494]: time="2025-11-28T18:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:06:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 18:06:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4770 "" "Go-http-client/1.1"
Nov 28 18:07:01 compute-0 podman[243761]: 2025-11-28 18:07:01.078622145 +0000 UTC m=+0.125373854 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 28 18:07:01 compute-0 openstack_network_exporter[205632]: ERROR   18:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:07:01 compute-0 openstack_network_exporter[205632]: ERROR   18:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:07:01 compute-0 openstack_network_exporter[205632]: ERROR   18:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:07:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:07:01 compute-0 openstack_network_exporter[205632]: ERROR   18:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:07:01 compute-0 openstack_network_exporter[205632]: ERROR   18:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:07:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:07:01 compute-0 nova_compute[189296]: 2025-11-28 18:07:01.620 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:07:03 compute-0 nova_compute[189296]: 2025-11-28 18:07:03.227 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:07:04 compute-0 nova_compute[189296]: 2025-11-28 18:07:04.097 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:07:05 compute-0 nova_compute[189296]: 2025-11-28 18:07:05.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:07:05 compute-0 nova_compute[189296]: 2025-11-28 18:07:05.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:07:05 compute-0 nova_compute[189296]: 2025-11-28 18:07:05.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 18:07:06 compute-0 nova_compute[189296]: 2025-11-28 18:07:06.677 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:07:06 compute-0 nova_compute[189296]: 2025-11-28 18:07:06.678 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:07:06 compute-0 nova_compute[189296]: 2025-11-28 18:07:06.679 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:07:06 compute-0 nova_compute[189296]: 2025-11-28 18:07:06.680 189300 DEBUG nova.objects.instance [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5d10f9fc-89ea-4059-8532-7e0aec0791d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:07:08 compute-0 nova_compute[189296]: 2025-11-28 18:07:08.230 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:07:09 compute-0 nova_compute[189296]: 2025-11-28 18:07:09.099 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:07:09 compute-0 nova_compute[189296]: 2025-11-28 18:07:09.817 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Updating instance_info_cache with network_info: [{"id": "0e0a227a-6212-4496-8954-fe210b763d0b", "address": "fa:16:3e:28:42:00", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e0a227a-62", "ovs_interfaceid": "0e0a227a-6212-4496-8954-fe210b763d0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:07:09 compute-0 nova_compute[189296]: 2025-11-28 18:07:09.832 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:07:09 compute-0 nova_compute[189296]: 2025-11-28 18:07:09.832 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:07:09 compute-0 nova_compute[189296]: 2025-11-28 18:07:09.833 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:07:09 compute-0 nova_compute[189296]: 2025-11-28 18:07:09.834 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:07:09 compute-0 nova_compute[189296]: 2025-11-28 18:07:09.835 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:07:09 compute-0 nova_compute[189296]: 2025-11-28 18:07:09.835 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:07:09 compute-0 nova_compute[189296]: 2025-11-28 18:07:09.835 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:07:09 compute-0 nova_compute[189296]: 2025-11-28 18:07:09.835 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:07:09 compute-0 nova_compute[189296]: 2025-11-28 18:07:09.836 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:07:09 compute-0 nova_compute[189296]: 2025-11-28 18:07:09.858 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:07:09 compute-0 nova_compute[189296]: 2025-11-28 18:07:09.859 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:07:09 compute-0 nova_compute[189296]: 2025-11-28 18:07:09.859 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:07:09 compute-0 nova_compute[189296]: 2025-11-28 18:07:09.859 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:07:09 compute-0 nova_compute[189296]: 2025-11-28 18:07:09.978 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:07:10 compute-0 nova_compute[189296]: 2025-11-28 18:07:10.049 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:07:10 compute-0 nova_compute[189296]: 2025-11-28 18:07:10.051 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:07:10 compute-0 nova_compute[189296]: 2025-11-28 18:07:10.112 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:07:10 compute-0 nova_compute[189296]: 2025-11-28 18:07:10.114 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:07:10 compute-0 nova_compute[189296]: 2025-11-28 18:07:10.173 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:07:10 compute-0 nova_compute[189296]: 2025-11-28 18:07:10.174 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:07:10 compute-0 nova_compute[189296]: 2025-11-28 18:07:10.232 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:07:10 compute-0 nova_compute[189296]: 2025-11-28 18:07:10.243 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:07:10 compute-0 nova_compute[189296]: 2025-11-28 18:07:10.303 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:07:10 compute-0 nova_compute[189296]: 2025-11-28 18:07:10.305 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:07:10 compute-0 nova_compute[189296]: 2025-11-28 18:07:10.367 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:07:10 compute-0 nova_compute[189296]: 2025-11-28 18:07:10.368 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:07:10 compute-0 nova_compute[189296]: 2025-11-28 18:07:10.427 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:07:10 compute-0 nova_compute[189296]: 2025-11-28 18:07:10.428 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:07:10 compute-0 nova_compute[189296]: 2025-11-28 18:07:10.483 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:07:10 compute-0 nova_compute[189296]: 2025-11-28 18:07:10.489 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:07:10 compute-0 nova_compute[189296]: 2025-11-28 18:07:10.570 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:07:10 compute-0 nova_compute[189296]: 2025-11-28 18:07:10.570 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:07:10 compute-0 nova_compute[189296]: 2025-11-28 18:07:10.624 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:07:10 compute-0 nova_compute[189296]: 2025-11-28 18:07:10.626 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:07:10 compute-0 nova_compute[189296]: 2025-11-28 18:07:10.686 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:07:10 compute-0 nova_compute[189296]: 2025-11-28 18:07:10.688 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:07:10 compute-0 nova_compute[189296]: 2025-11-28 18:07:10.771 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:07:11 compute-0 podman[243823]: 2025-11-28 18:07:11.040905574 +0000 UTC m=+0.103519427 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:07:11 compute-0 nova_compute[189296]: 2025-11-28 18:07:11.128 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:07:11 compute-0 nova_compute[189296]: 2025-11-28 18:07:11.130 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4769MB free_disk=72.34053421020508GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:07:11 compute-0 nova_compute[189296]: 2025-11-28 18:07:11.131 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:07:11 compute-0 nova_compute[189296]: 2025-11-28 18:07:11.131 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:07:11 compute-0 nova_compute[189296]: 2025-11-28 18:07:11.233 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 5d10f9fc-89ea-4059-8532-7e0aec0791d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:07:11 compute-0 nova_compute[189296]: 2025-11-28 18:07:11.233 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:07:11 compute-0 nova_compute[189296]: 2025-11-28 18:07:11.233 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 738e5649-3e79-434b-9fbe-4aff6d71b051 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:07:11 compute-0 nova_compute[189296]: 2025-11-28 18:07:11.234 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:07:11 compute-0 nova_compute[189296]: 2025-11-28 18:07:11.234 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:07:11 compute-0 nova_compute[189296]: 2025-11-28 18:07:11.314 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:07:11 compute-0 nova_compute[189296]: 2025-11-28 18:07:11.331 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:07:11 compute-0 nova_compute[189296]: 2025-11-28 18:07:11.332 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:07:11 compute-0 nova_compute[189296]: 2025-11-28 18:07:11.332 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:07:13 compute-0 nova_compute[189296]: 2025-11-28 18:07:13.231 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:07:14 compute-0 nova_compute[189296]: 2025-11-28 18:07:14.101 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:07:16 compute-0 nova_compute[189296]: 2025-11-28 18:07:16.123 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:07:18 compute-0 nova_compute[189296]: 2025-11-28 18:07:18.233 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:07:19 compute-0 nova_compute[189296]: 2025-11-28 18:07:19.103 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:07:21 compute-0 podman[243849]: 2025-11-28 18:07:21.044522074 +0000 UTC m=+0.097928924 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, 
tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=f26160204c78771e78cdd2489258319b)
Nov 28 18:07:21 compute-0 podman[243848]: 2025-11-28 18:07:21.051939489 +0000 UTC m=+0.107017299 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base 
Image 9., config_id=edpm, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vcs-type=git, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 28 18:07:21 compute-0 podman[243850]: 2025-11-28 18:07:21.063051203 +0000 UTC m=+0.107369307 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 28 18:07:23 compute-0 nova_compute[189296]: 2025-11-28 18:07:23.236 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:07:24 compute-0 nova_compute[189296]: 2025-11-28 18:07:24.106 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:07:27 compute-0 podman[243906]: 2025-11-28 18:07:27.043332733 +0000 UTC m=+0.095679260 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, 
config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:07:27 compute-0 podman[243907]: 2025-11-28 18:07:27.061955956 +0000 UTC m=+0.104897860 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 28 18:07:28 compute-0 nova_compute[189296]: 2025-11-28 18:07:28.238 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:07:29 compute-0 podman[243943]: 2025-11-28 18:07:29.012154017 +0000 UTC m=+0.065331401 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:07:29 compute-0 podman[243944]: 2025-11-28 18:07:29.064883787 +0000 UTC m=+0.110108652 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.buildah.version=1.29.0, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, 
version=9.4, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 28 18:07:29 compute-0 nova_compute[189296]: 2025-11-28 18:07:29.107 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:07:29 compute-0 podman[203494]: time="2025-11-28T18:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:07:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 18:07:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4786 "" "Go-http-client/1.1"
Nov 28 18:07:31 compute-0 openstack_network_exporter[205632]: ERROR   18:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:07:31 compute-0 openstack_network_exporter[205632]: ERROR   18:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:07:31 compute-0 openstack_network_exporter[205632]: ERROR   18:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:07:31 compute-0 openstack_network_exporter[205632]: ERROR   18:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:07:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:07:31 compute-0 openstack_network_exporter[205632]: ERROR   18:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:07:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:07:32 compute-0 podman[243984]: 2025-11-28 18:07:32.067481184 +0000 UTC m=+0.119430735 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:07:33 compute-0 nova_compute[189296]: 2025-11-28 18:07:33.241 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:07:34 compute-0 nova_compute[189296]: 2025-11-28 18:07:34.110 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:07:38 compute-0 nova_compute[189296]: 2025-11-28 18:07:38.245 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:07:39 compute-0 nova_compute[189296]: 2025-11-28 18:07:39.113 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:07:42 compute-0 podman[244008]: 2025-11-28 18:07:42.023631857 +0000 UTC m=+0.073475164 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 28 18:07:43 compute-0 nova_compute[189296]: 2025-11-28 18:07:43.246 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:07:44 compute-0 nova_compute[189296]: 2025-11-28 18:07:44.115 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:07:48 compute-0 nova_compute[189296]: 2025-11-28 18:07:48.249 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:07:49 compute-0 nova_compute[189296]: 2025-11-28 18:07:49.117 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:07:52 compute-0 podman[244034]: 2025-11-28 18:07:52.03200752 +0000 UTC m=+0.088720896 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=f26160204c78771e78cdd2489258319b, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Nov 28 18:07:52 compute-0 podman[244033]: 2025-11-28 18:07:52.042653272 +0000 UTC m=+0.104492359 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64)
Nov 28 18:07:52 compute-0 podman[244035]: 2025-11-28 18:07:52.048331636 +0000 UTC m=+0.091750277 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 28 18:07:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:07:52.614 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:07:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:07:52.614 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:07:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:07:52.615 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:07:53 compute-0 nova_compute[189296]: 2025-11-28 18:07:53.251 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:07:53 compute-0 nova_compute[189296]: 2025-11-28 18:07:53.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:07:53 compute-0 nova_compute[189296]: 2025-11-28 18:07:53.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 28 18:07:54 compute-0 nova_compute[189296]: 2025-11-28 18:07:54.118 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:07:58 compute-0 podman[244091]: 2025-11-28 18:07:58.014613476 +0000 UTC m=+0.072930842 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent)
Nov 28 18:07:58 compute-0 podman[244092]: 2025-11-28 18:07:58.023619159 +0000 UTC m=+0.077267474 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi)
Nov 28 18:07:58 compute-0 nova_compute[189296]: 2025-11-28 18:07:58.254 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:07:59 compute-0 nova_compute[189296]: 2025-11-28 18:07:59.120 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:07:59 compute-0 nova_compute[189296]: 2025-11-28 18:07:59.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:07:59 compute-0 podman[203494]: time="2025-11-28T18:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:07:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 18:07:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4782 "" "Go-http-client/1.1"
Nov 28 18:08:00 compute-0 podman[244130]: 2025-11-28 18:08:00.032305018 +0000 UTC m=+0.084676100 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 18:08:00 compute-0 podman[244131]: 2025-11-28 18:08:00.044078087 +0000 UTC m=+0.092256730 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, release=1214.1726694543, vcs-type=git, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., version=9.4, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, distribution-scope=public, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9)
Nov 28 18:08:01 compute-0 openstack_network_exporter[205632]: ERROR   18:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:08:01 compute-0 openstack_network_exporter[205632]: ERROR   18:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:08:01 compute-0 openstack_network_exporter[205632]: ERROR   18:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:08:01 compute-0 openstack_network_exporter[205632]: ERROR   18:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:08:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:08:01 compute-0 openstack_network_exporter[205632]: ERROR   18:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:08:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:08:01 compute-0 nova_compute[189296]: 2025-11-28 18:08:01.636 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:08:01 compute-0 nova_compute[189296]: 2025-11-28 18:08:01.637 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 28 18:08:01 compute-0 nova_compute[189296]: 2025-11-28 18:08:01.652 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 28 18:08:02 compute-0 nova_compute[189296]: 2025-11-28 18:08:02.637 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:08:03 compute-0 podman[244171]: 2025-11-28 18:08:03.055337498 +0000 UTC m=+0.114052707 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:08:03 compute-0 nova_compute[189296]: 2025-11-28 18:08:03.257 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:04 compute-0 nova_compute[189296]: 2025-11-28 18:08:04.123 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:05 compute-0 nova_compute[189296]: 2025-11-28 18:08:05.974 189300 DEBUG nova.compute.manager [req-2f8a21a7-161c-416d-999f-71bd0f04dd53 req-796c28be-8926-4e4f-b3a2-35e201a1433f 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Received event network-changed-7b3b067b-5dff-4342-98fa-c66e054d025d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:08:05 compute-0 nova_compute[189296]: 2025-11-28 18:08:05.975 189300 DEBUG nova.compute.manager [req-2f8a21a7-161c-416d-999f-71bd0f04dd53 req-796c28be-8926-4e4f-b3a2-35e201a1433f 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Refreshing instance network info cache due to event network-changed-7b3b067b-5dff-4342-98fa-c66e054d025d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 28 18:08:05 compute-0 nova_compute[189296]: 2025-11-28 18:08:05.976 189300 DEBUG oslo_concurrency.lockutils [req-2f8a21a7-161c-416d-999f-71bd0f04dd53 req-796c28be-8926-4e4f-b3a2-35e201a1433f 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:08:05 compute-0 nova_compute[189296]: 2025-11-28 18:08:05.976 189300 DEBUG oslo_concurrency.lockutils [req-2f8a21a7-161c-416d-999f-71bd0f04dd53 req-796c28be-8926-4e4f-b3a2-35e201a1433f 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:08:05 compute-0 nova_compute[189296]: 2025-11-28 18:08:05.977 189300 DEBUG nova.network.neutron [req-2f8a21a7-161c-416d-999f-71bd0f04dd53 req-796c28be-8926-4e4f-b3a2-35e201a1433f 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Refreshing network info cache for port 7b3b067b-5dff-4342-98fa-c66e054d025d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 18:08:06 compute-0 nova_compute[189296]: 2025-11-28 18:08:06.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:08:06 compute-0 nova_compute[189296]: 2025-11-28 18:08:06.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:08:06 compute-0 nova_compute[189296]: 2025-11-28 18:08:06.724 189300 DEBUG oslo_concurrency.lockutils [None req-a82e6dc5-41ac-4426-905a-39979ed0256b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:08:06 compute-0 nova_compute[189296]: 2025-11-28 18:08:06.725 189300 DEBUG oslo_concurrency.lockutils [None req-a82e6dc5-41ac-4426-905a-39979ed0256b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:08:06 compute-0 nova_compute[189296]: 2025-11-28 18:08:06.725 189300 DEBUG oslo_concurrency.lockutils [None req-a82e6dc5-41ac-4426-905a-39979ed0256b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:08:06 compute-0 nova_compute[189296]: 2025-11-28 18:08:06.726 189300 DEBUG oslo_concurrency.lockutils [None req-a82e6dc5-41ac-4426-905a-39979ed0256b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:08:06 compute-0 nova_compute[189296]: 2025-11-28 18:08:06.726 189300 DEBUG oslo_concurrency.lockutils [None req-a82e6dc5-41ac-4426-905a-39979ed0256b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:08:06 compute-0 nova_compute[189296]: 2025-11-28 18:08:06.728 189300 INFO nova.compute.manager [None req-a82e6dc5-41ac-4426-905a-39979ed0256b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Terminating instance#033[00m
Nov 28 18:08:06 compute-0 nova_compute[189296]: 2025-11-28 18:08:06.730 189300 DEBUG nova.compute.manager [None req-a82e6dc5-41ac-4426-905a-39979ed0256b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 28 18:08:06 compute-0 kernel: tap7b3b067b-5d (unregistering): left promiscuous mode
Nov 28 18:08:06 compute-0 NetworkManager[56307]: <info>  [1764353286.7752] device (tap7b3b067b-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 28 18:08:06 compute-0 ovn_controller[97771]: 2025-11-28T18:08:06Z|00061|binding|INFO|Releasing lport 7b3b067b-5dff-4342-98fa-c66e054d025d from this chassis (sb_readonly=0)
Nov 28 18:08:06 compute-0 ovn_controller[97771]: 2025-11-28T18:08:06Z|00062|binding|INFO|Setting lport 7b3b067b-5dff-4342-98fa-c66e054d025d down in Southbound
Nov 28 18:08:06 compute-0 nova_compute[189296]: 2025-11-28 18:08:06.786 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:06 compute-0 ovn_controller[97771]: 2025-11-28T18:08:06Z|00063|binding|INFO|Removing iface tap7b3b067b-5d ovn-installed in OVS
Nov 28 18:08:06 compute-0 nova_compute[189296]: 2025-11-28 18:08:06.790 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:06 compute-0 nova_compute[189296]: 2025-11-28 18:08:06.804 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:06 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Nov 28 18:08:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:08:06.828 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:01:76 192.168.0.178'], port_security=['fa:16:3e:7e:01:76 192.168.0.178'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-po7lv7knpyto-myqv6vc5iwu6-3wmt66b4jk5x-port-25v5lqpwleyb', 'neutron:cidrs': '192.168.0.178/24', 'neutron:device_id': 'fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-po7lv7knpyto-myqv6vc5iwu6-3wmt66b4jk5x-port-25v5lqpwleyb', 'neutron:project_id': '79ee04b003ca4eb8a045699c7852a8b0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a309e23b-efb6-4377-8050-5a658324ee07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37710b57-0bdd-4c1a-aa8d-366aa83fbf51, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=7b3b067b-5dff-4342-98fa-c66e054d025d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:08:06 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 1min 10.303s CPU time.
Nov 28 18:08:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:08:06.830 106624 INFO neutron.agent.ovn.metadata.agent [-] Port 7b3b067b-5dff-4342-98fa-c66e054d025d in datapath 5cc11a5f-7338-49fd-ba02-2db7ff676c4f unbound from our chassis#033[00m
Nov 28 18:08:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:08:06.831 106624 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cc11a5f-7338-49fd-ba02-2db7ff676c4f#033[00m
Nov 28 18:08:06 compute-0 systemd-machined[155703]: Machine qemu-4-instance-00000004 terminated.
Nov 28 18:08:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:08:06.848 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[ec12c89b-e72f-4aa9-93bb-e775c980b352]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:08:06 compute-0 nova_compute[189296]: 2025-11-28 18:08:06.856 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:08:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:08:06.879 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[402d27a1-1301-4345-90c5-71ab99fb6c1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:08:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:08:06.883 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[842ec79c-e30c-4ea2-a155-7413fb54da3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:08:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:08:06.913 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[c4dea0e1-5372-463b-b345-77d5612e2822]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:08:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:08:06.930 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[7c804068-1284-4a4e-ac61-d7da2b425b6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc11a5f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:38:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 17, 'rx_bytes': 532, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 17, 'rx_bytes': 532, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 370971, 'reachable_time': 43855, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244211, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:08:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:08:06.946 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[441b6b09-e5ea-4099-bc70-eb76e639acaf]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap5cc11a5f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 370983, 'tstamp': 370983}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244212, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5cc11a5f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 370986, 'tstamp': 370986}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244212, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:08:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:08:06.948 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc11a5f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:08:06 compute-0 nova_compute[189296]: 2025-11-28 18:08:06.950 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:06 compute-0 nova_compute[189296]: 2025-11-28 18:08:06.957 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:08:06.957 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cc11a5f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:08:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:08:06.957 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:08:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:08:06.958 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cc11a5f-70, col_values=(('external_ids', {'iface-id': '467e3797-177d-4174-b963-0efbd15595b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:08:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:08:06.958 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:08:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:08:06.980 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:8b:d3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:a2:f8:d3:3f:9a'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:08:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:08:06.981 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 28 18:08:06 compute-0 nova_compute[189296]: 2025-11-28 18:08:06.981 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.010 189300 INFO nova.virt.libvirt.driver [-] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Instance destroyed successfully.#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.010 189300 DEBUG nova.objects.instance [None req-a82e6dc5-41ac-4426-905a-39979ed0256b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lazy-loading 'resources' on Instance uuid fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.058 189300 DEBUG nova.virt.libvirt.vif [None req-a82e6dc5-41ac-4426-905a-39979ed0256b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-28T18:02:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-7knpyto-myqv6vc5iwu6-3wmt66b4jk5x-vnf-uuehi3czwwyv',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7knpyto-myqv6vc5iwu6-3wmt66b4jk5x-vnf-uuehi3czwwyv',id=4,image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-28T18:02:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='ac6a0a76-f006-4c50-a4a8-904a1f128161'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='79ee04b003ca4eb8a045699c7852a8b0',ramdisk_id='',reservation_id='r-z06d29og',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-28T18:02:23Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04Mjg2NjU2MTQzNDgwNTU5MDcyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgyODY2NTYxNDM0ODA1NTkwNzI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODI4NjY1NjE0MzQ4MDU1OTA3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgyODY2NTYxNDM0ODA1NTkwNzI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04Mjg2NjU2MTQzNDgwNTU5MDcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04Mjg2NjU2MTQzNDgwNTU5MDcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Nov 28 18:08:07 compute-0 nova_compute[189296]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODI4N
jY1NjE0MzQ4MDU1OTA3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgyODY2NTYxNDM0ODA1NTkwNzI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04Mjg2NjU2MTQzNDgwNTU5MDcyPT0tLQo=',user_id='6a35450c34a344b1a4e63aae1be2b971',uuid=fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7b3b067b-5dff-4342-98fa-c66e054d025d", "address": "fa:16:3e:7e:01:76", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b067b-5d", "ovs_interfaceid": "7b3b067b-5dff-4342-98fa-c66e054d025d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.058 189300 DEBUG nova.network.os_vif_util [None req-a82e6dc5-41ac-4426-905a-39979ed0256b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converting VIF {"id": "7b3b067b-5dff-4342-98fa-c66e054d025d", "address": "fa:16:3e:7e:01:76", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b067b-5d", "ovs_interfaceid": "7b3b067b-5dff-4342-98fa-c66e054d025d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.059 189300 DEBUG nova.network.os_vif_util [None req-a82e6dc5-41ac-4426-905a-39979ed0256b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7e:01:76,bridge_name='br-int',has_traffic_filtering=True,id=7b3b067b-5dff-4342-98fa-c66e054d025d,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7b3b067b-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.060 189300 DEBUG os_vif [None req-a82e6dc5-41ac-4426-905a-39979ed0256b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:01:76,bridge_name='br-int',has_traffic_filtering=True,id=7b3b067b-5dff-4342-98fa-c66e054d025d,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7b3b067b-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.061 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.062 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b3b067b-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.063 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.065 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.068 189300 INFO os_vif [None req-a82e6dc5-41ac-4426-905a-39979ed0256b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:01:76,bridge_name='br-int',has_traffic_filtering=True,id=7b3b067b-5dff-4342-98fa-c66e054d025d,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7b3b067b-5d')#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.069 189300 INFO nova.virt.libvirt.driver [None req-a82e6dc5-41ac-4426-905a-39979ed0256b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Deleting instance files /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf_del#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.069 189300 INFO nova.virt.libvirt.driver [None req-a82e6dc5-41ac-4426-905a-39979ed0256b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Deletion of /var/lib/nova/instances/fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf_del complete#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.139 189300 INFO nova.compute.manager [None req-a82e6dc5-41ac-4426-905a-39979ed0256b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Took 0.41 seconds to destroy the instance on the hypervisor.#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.139 189300 DEBUG oslo.service.loopingcall [None req-a82e6dc5-41ac-4426-905a-39979ed0256b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.139 189300 DEBUG nova.compute.manager [-] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.140 189300 DEBUG nova.network.neutron [-] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 28 18:08:07 compute-0 rsyslogd[236416]: message too long (8192) with configured size 8096, begin of message is: 2025-11-28 18:08:07.058 189300 DEBUG nova.virt.libvirt.vif [None req-a82e6dc5-41 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.473 189300 DEBUG nova.network.neutron [req-2f8a21a7-161c-416d-999f-71bd0f04dd53 req-796c28be-8926-4e4f-b3a2-35e201a1433f 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Updated VIF entry in instance network info cache for port 7b3b067b-5dff-4342-98fa-c66e054d025d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.474 189300 DEBUG nova.network.neutron [req-2f8a21a7-161c-416d-999f-71bd0f04dd53 req-796c28be-8926-4e4f-b3a2-35e201a1433f 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Updating instance_info_cache with network_info: [{"id": "7b3b067b-5dff-4342-98fa-c66e054d025d", "address": "fa:16:3e:7e:01:76", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b067b-5d", "ovs_interfaceid": "7b3b067b-5dff-4342-98fa-c66e054d025d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.477 189300 DEBUG nova.compute.manager [req-b5b0a5ba-3be8-4722-b494-747c7c4752cd req-9eefc615-7da4-484e-bf75-7ee9bc9c8ece 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Received event network-vif-unplugged-7b3b067b-5dff-4342-98fa-c66e054d025d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.477 189300 DEBUG oslo_concurrency.lockutils [req-b5b0a5ba-3be8-4722-b494-747c7c4752cd req-9eefc615-7da4-484e-bf75-7ee9bc9c8ece 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.478 189300 DEBUG oslo_concurrency.lockutils [req-b5b0a5ba-3be8-4722-b494-747c7c4752cd req-9eefc615-7da4-484e-bf75-7ee9bc9c8ece 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.478 189300 DEBUG oslo_concurrency.lockutils [req-b5b0a5ba-3be8-4722-b494-747c7c4752cd req-9eefc615-7da4-484e-bf75-7ee9bc9c8ece 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.478 189300 DEBUG nova.compute.manager [req-b5b0a5ba-3be8-4722-b494-747c7c4752cd req-9eefc615-7da4-484e-bf75-7ee9bc9c8ece 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] No waiting events found dispatching network-vif-unplugged-7b3b067b-5dff-4342-98fa-c66e054d025d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.478 189300 DEBUG nova.compute.manager [req-b5b0a5ba-3be8-4722-b494-747c7c4752cd req-9eefc615-7da4-484e-bf75-7ee9bc9c8ece 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Received event network-vif-unplugged-7b3b067b-5dff-4342-98fa-c66e054d025d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.553 189300 DEBUG oslo_concurrency.lockutils [req-2f8a21a7-161c-416d-999f-71bd0f04dd53 req-796c28be-8926-4e4f-b3a2-35e201a1433f 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.553 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:08:07 compute-0 nova_compute[189296]: 2025-11-28 18:08:07.553 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:08:08 compute-0 nova_compute[189296]: 2025-11-28 18:08:08.260 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:09 compute-0 nova_compute[189296]: 2025-11-28 18:08:09.970 189300 DEBUG nova.compute.manager [req-4d299684-9f79-4938-8fc6-622aaac88c27 req-ecb87843-94f1-4748-b01a-e488a0ae0fb7 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Received event network-vif-plugged-7b3b067b-5dff-4342-98fa-c66e054d025d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:08:09 compute-0 nova_compute[189296]: 2025-11-28 18:08:09.970 189300 DEBUG oslo_concurrency.lockutils [req-4d299684-9f79-4938-8fc6-622aaac88c27 req-ecb87843-94f1-4748-b01a-e488a0ae0fb7 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:08:09 compute-0 nova_compute[189296]: 2025-11-28 18:08:09.970 189300 DEBUG oslo_concurrency.lockutils [req-4d299684-9f79-4938-8fc6-622aaac88c27 req-ecb87843-94f1-4748-b01a-e488a0ae0fb7 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:08:09 compute-0 nova_compute[189296]: 2025-11-28 18:08:09.971 189300 DEBUG oslo_concurrency.lockutils [req-4d299684-9f79-4938-8fc6-622aaac88c27 req-ecb87843-94f1-4748-b01a-e488a0ae0fb7 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:08:09 compute-0 nova_compute[189296]: 2025-11-28 18:08:09.971 189300 DEBUG nova.compute.manager [req-4d299684-9f79-4938-8fc6-622aaac88c27 req-ecb87843-94f1-4748-b01a-e488a0ae0fb7 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] No waiting events found dispatching network-vif-plugged-7b3b067b-5dff-4342-98fa-c66e054d025d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:08:09 compute-0 nova_compute[189296]: 2025-11-28 18:08:09.971 189300 WARNING nova.compute.manager [req-4d299684-9f79-4938-8fc6-622aaac88c27 req-ecb87843-94f1-4748-b01a-e488a0ae0fb7 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Received unexpected event network-vif-plugged-7b3b067b-5dff-4342-98fa-c66e054d025d for instance with vm_state active and task_state deleting.#033[00m
Nov 28 18:08:09 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:08:09.983 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d60b742f-7e94-4137-b50a-cfc8eac54167, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:08:10 compute-0 nova_compute[189296]: 2025-11-28 18:08:10.556 189300 DEBUG nova.network.neutron [-] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:08:10 compute-0 nova_compute[189296]: 2025-11-28 18:08:10.578 189300 INFO nova.compute.manager [-] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Took 3.44 seconds to deallocate network for instance.#033[00m
Nov 28 18:08:10 compute-0 nova_compute[189296]: 2025-11-28 18:08:10.625 189300 DEBUG oslo_concurrency.lockutils [None req-a82e6dc5-41ac-4426-905a-39979ed0256b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:08:10 compute-0 nova_compute[189296]: 2025-11-28 18:08:10.626 189300 DEBUG oslo_concurrency.lockutils [None req-a82e6dc5-41ac-4426-905a-39979ed0256b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:08:10 compute-0 nova_compute[189296]: 2025-11-28 18:08:10.735 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Updating instance_info_cache with network_info: [{"id": "7b3b067b-5dff-4342-98fa-c66e054d025d", "address": "fa:16:3e:7e:01:76", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b067b-5d", "ovs_interfaceid": "7b3b067b-5dff-4342-98fa-c66e054d025d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:08:10 compute-0 nova_compute[189296]: 2025-11-28 18:08:10.751 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:08:10 compute-0 nova_compute[189296]: 2025-11-28 18:08:10.751 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:08:10 compute-0 nova_compute[189296]: 2025-11-28 18:08:10.751 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:08:10 compute-0 nova_compute[189296]: 2025-11-28 18:08:10.751 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:08:10 compute-0 nova_compute[189296]: 2025-11-28 18:08:10.752 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:08:10 compute-0 nova_compute[189296]: 2025-11-28 18:08:10.752 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:08:10 compute-0 nova_compute[189296]: 2025-11-28 18:08:10.752 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:08:10 compute-0 nova_compute[189296]: 2025-11-28 18:08:10.752 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:08:10 compute-0 nova_compute[189296]: 2025-11-28 18:08:10.753 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:08:10 compute-0 nova_compute[189296]: 2025-11-28 18:08:10.774 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:08:10 compute-0 nova_compute[189296]: 2025-11-28 18:08:10.954 189300 DEBUG nova.compute.provider_tree [None req-a82e6dc5-41ac-4426-905a-39979ed0256b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:08:10 compute-0 nova_compute[189296]: 2025-11-28 18:08:10.974 189300 DEBUG nova.scheduler.client.report [None req-a82e6dc5-41ac-4426-905a-39979ed0256b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:08:11 compute-0 nova_compute[189296]: 2025-11-28 18:08:11.013 189300 DEBUG oslo_concurrency.lockutils [None req-a82e6dc5-41ac-4426-905a-39979ed0256b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:08:11 compute-0 nova_compute[189296]: 2025-11-28 18:08:11.016 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:08:11 compute-0 nova_compute[189296]: 2025-11-28 18:08:11.016 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:08:11 compute-0 nova_compute[189296]: 2025-11-28 18:08:11.016 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:08:11 compute-0 nova_compute[189296]: 2025-11-28 18:08:11.037 189300 INFO nova.scheduler.client.report [None req-a82e6dc5-41ac-4426-905a-39979ed0256b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Deleted allocations for instance fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf#033[00m
Nov 28 18:08:11 compute-0 nova_compute[189296]: 2025-11-28 18:08:11.151 189300 DEBUG oslo_concurrency.lockutils [None req-a82e6dc5-41ac-4426-905a-39979ed0256b 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:08:11 compute-0 nova_compute[189296]: 2025-11-28 18:08:11.160 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:08:11 compute-0 nova_compute[189296]: 2025-11-28 18:08:11.246 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:08:11 compute-0 nova_compute[189296]: 2025-11-28 18:08:11.247 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:08:11 compute-0 nova_compute[189296]: 2025-11-28 18:08:11.343 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:08:11 compute-0 nova_compute[189296]: 2025-11-28 18:08:11.344 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:08:11 compute-0 nova_compute[189296]: 2025-11-28 18:08:11.401 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:08:11 compute-0 nova_compute[189296]: 2025-11-28 18:08:11.403 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:08:11 compute-0 nova_compute[189296]: 2025-11-28 18:08:11.484 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:08:11 compute-0 nova_compute[189296]: 2025-11-28 18:08:11.494 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:08:11 compute-0 nova_compute[189296]: 2025-11-28 18:08:11.576 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:08:11 compute-0 nova_compute[189296]: 2025-11-28 18:08:11.578 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:08:11 compute-0 nova_compute[189296]: 2025-11-28 18:08:11.641 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:08:11 compute-0 nova_compute[189296]: 2025-11-28 18:08:11.642 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:08:11 compute-0 nova_compute[189296]: 2025-11-28 18:08:11.702 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:08:11 compute-0 nova_compute[189296]: 2025-11-28 18:08:11.704 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:08:11 compute-0 nova_compute[189296]: 2025-11-28 18:08:11.762 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:08:12 compute-0 nova_compute[189296]: 2025-11-28 18:08:12.064 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:12 compute-0 nova_compute[189296]: 2025-11-28 18:08:12.089 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:08:12 compute-0 nova_compute[189296]: 2025-11-28 18:08:12.090 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4946MB free_disk=72.36306762695312GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:08:12 compute-0 nova_compute[189296]: 2025-11-28 18:08:12.090 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:08:12 compute-0 nova_compute[189296]: 2025-11-28 18:08:12.090 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:08:12 compute-0 nova_compute[189296]: 2025-11-28 18:08:12.148 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 5d10f9fc-89ea-4059-8532-7e0aec0791d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:08:12 compute-0 nova_compute[189296]: 2025-11-28 18:08:12.148 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 738e5649-3e79-434b-9fbe-4aff6d71b051 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:08:12 compute-0 nova_compute[189296]: 2025-11-28 18:08:12.148 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:08:12 compute-0 nova_compute[189296]: 2025-11-28 18:08:12.148 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:08:12 compute-0 nova_compute[189296]: 2025-11-28 18:08:12.206 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:08:12 compute-0 nova_compute[189296]: 2025-11-28 18:08:12.218 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:08:12 compute-0 nova_compute[189296]: 2025-11-28 18:08:12.236 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:08:12 compute-0 nova_compute[189296]: 2025-11-28 18:08:12.237 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:08:13 compute-0 podman[244259]: 2025-11-28 18:08:13.032991522 +0000 UTC m=+0.076249670 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:08:13 compute-0 nova_compute[189296]: 2025-11-28 18:08:13.261 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:15 compute-0 nova_compute[189296]: 2025-11-28 18:08:15.459 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:08:15 compute-0 nova_compute[189296]: 2025-11-28 18:08:15.486 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:08:15 compute-0 nova_compute[189296]: 2025-11-28 18:08:15.511 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Triggering sync for uuid 5d10f9fc-89ea-4059-8532-7e0aec0791d6 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 28 18:08:15 compute-0 nova_compute[189296]: 2025-11-28 18:08:15.512 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Triggering sync for uuid 738e5649-3e79-434b-9fbe-4aff6d71b051 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 28 18:08:15 compute-0 nova_compute[189296]: 2025-11-28 18:08:15.513 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:08:15 compute-0 nova_compute[189296]: 2025-11-28 18:08:15.514 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:08:15 compute-0 nova_compute[189296]: 2025-11-28 18:08:15.514 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "738e5649-3e79-434b-9fbe-4aff6d71b051" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:08:15 compute-0 nova_compute[189296]: 2025-11-28 18:08:15.515 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "738e5649-3e79-434b-9fbe-4aff6d71b051" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:08:15 compute-0 nova_compute[189296]: 2025-11-28 18:08:15.553 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "738e5649-3e79-434b-9fbe-4aff6d71b051" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:08:15 compute-0 nova_compute[189296]: 2025-11-28 18:08:15.555 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:08:15 compute-0 nova_compute[189296]: 2025-11-28 18:08:15.656 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:08:17 compute-0 nova_compute[189296]: 2025-11-28 18:08:17.068 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:18 compute-0 nova_compute[189296]: 2025-11-28 18:08:18.265 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:22 compute-0 nova_compute[189296]: 2025-11-28 18:08:22.009 189300 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764353287.0066545, fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:08:22 compute-0 nova_compute[189296]: 2025-11-28 18:08:22.009 189300 INFO nova.compute.manager [-] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] VM Stopped (Lifecycle Event)#033[00m
Nov 28 18:08:22 compute-0 nova_compute[189296]: 2025-11-28 18:08:22.028 189300 DEBUG nova.compute.manager [None req-8bacc14b-6e50-4509-8804-7323ef8f0216 - - - - - -] [instance: fd0fda07-82a4-4bbf-b4d3-ef4f481ce1cf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:08:22 compute-0 nova_compute[189296]: 2025-11-28 18:08:22.072 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:23 compute-0 podman[244284]: 2025-11-28 18:08:23.048961305 +0000 UTC m=+0.102926913 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release=1755695350, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, name=ubi9-minimal, version=9.6, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41)
Nov 28 18:08:23 compute-0 podman[244286]: 2025-11-28 18:08:23.066063541 +0000 UTC m=+0.104695795 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 28 18:08:23 compute-0 podman[244285]: 2025-11-28 18:08:23.067819172 +0000 UTC m=+0.113202306 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=f26160204c78771e78cdd2489258319b, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 28 18:08:23 compute-0 nova_compute[189296]: 2025-11-28 18:08:23.268 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:27 compute-0 nova_compute[189296]: 2025-11-28 18:08:27.074 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:28 compute-0 nova_compute[189296]: 2025-11-28 18:08:28.270 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:29 compute-0 podman[244339]: 2025-11-28 18:08:29.020872037 +0000 UTC m=+0.078120544 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 28 18:08:29 compute-0 podman[244340]: 2025-11-28 18:08:29.061267166 +0000 UTC m=+0.101564731 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 28 18:08:29 compute-0 podman[203494]: time="2025-11-28T18:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:08:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 18:08:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4780 "" "Go-http-client/1.1"
Nov 28 18:08:31 compute-0 podman[244379]: 2025-11-28 18:08:31.02678426 +0000 UTC m=+0.080357827 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-type=git, version=9.4, architecture=x86_64, name=ubi9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, managed_by=edpm_ansible, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 28 18:08:31 compute-0 podman[244378]: 2025-11-28 18:08:31.045809091 +0000 UTC m=+0.103870205 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 28 18:08:31 compute-0 openstack_network_exporter[205632]: ERROR   18:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:08:31 compute-0 openstack_network_exporter[205632]: ERROR   18:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:08:31 compute-0 openstack_network_exporter[205632]: ERROR   18:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:08:31 compute-0 openstack_network_exporter[205632]: ERROR   18:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:08:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:08:31 compute-0 openstack_network_exporter[205632]: ERROR   18:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:08:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:08:32 compute-0 nova_compute[189296]: 2025-11-28 18:08:32.076 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:33 compute-0 nova_compute[189296]: 2025-11-28 18:08:33.271 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:34 compute-0 podman[244417]: 2025-11-28 18:08:34.057606995 +0000 UTC m=+0.113884382 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 28 18:08:37 compute-0 nova_compute[189296]: 2025-11-28 18:08:37.079 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:38 compute-0 nova_compute[189296]: 2025-11-28 18:08:38.273 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:42 compute-0 nova_compute[189296]: 2025-11-28 18:08:42.081 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:43 compute-0 ovn_controller[97771]: 2025-11-28T18:08:43Z|00064|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Nov 28 18:08:43 compute-0 nova_compute[189296]: 2025-11-28 18:08:43.275 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:08:43 compute-0 podman[244443]: 2025-11-28 18:08:43.992281009 +0000 UTC m=+0.054804051 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:08:47 compute-0 nova_compute[189296]: 2025-11-28 18:08:47.085 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:08:48 compute-0 nova_compute[189296]: 2025-11-28 18:08:48.279 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:08:50 compute-0 nova_compute[189296]: 2025-11-28 18:08:50.727 189300 DEBUG oslo_concurrency.lockutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "f4cd3d4f-2952-4e03-95f4-459cddcc17c9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:08:50 compute-0 nova_compute[189296]: 2025-11-28 18:08:50.728 189300 DEBUG oslo_concurrency.lockutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "f4cd3d4f-2952-4e03-95f4-459cddcc17c9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:08:50 compute-0 nova_compute[189296]: 2025-11-28 18:08:50.772 189300 DEBUG nova.compute.manager [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 28 18:08:50 compute-0 nova_compute[189296]: 2025-11-28 18:08:50.868 189300 DEBUG oslo_concurrency.lockutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:08:50 compute-0 nova_compute[189296]: 2025-11-28 18:08:50.868 189300 DEBUG oslo_concurrency.lockutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:08:50 compute-0 nova_compute[189296]: 2025-11-28 18:08:50.877 189300 DEBUG nova.virt.hardware [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 28 18:08:50 compute-0 nova_compute[189296]: 2025-11-28 18:08:50.878 189300 INFO nova.compute.claims [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Claim successful on node compute-0.ctlplane.example.com
Nov 28 18:08:51 compute-0 nova_compute[189296]: 2025-11-28 18:08:51.028 189300 DEBUG nova.compute.provider_tree [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 28 18:08:51 compute-0 nova_compute[189296]: 2025-11-28 18:08:51.042 189300 DEBUG nova.scheduler.client.report [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 28 18:08:51 compute-0 nova_compute[189296]: 2025-11-28 18:08:51.062 189300 DEBUG oslo_concurrency.lockutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:08:51 compute-0 nova_compute[189296]: 2025-11-28 18:08:51.063 189300 DEBUG nova.compute.manager [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 28 18:08:51 compute-0 nova_compute[189296]: 2025-11-28 18:08:51.117 189300 DEBUG nova.compute.manager [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 28 18:08:51 compute-0 nova_compute[189296]: 2025-11-28 18:08:51.147 189300 INFO nova.virt.libvirt.driver [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 28 18:08:51 compute-0 nova_compute[189296]: 2025-11-28 18:08:51.175 189300 DEBUG nova.compute.manager [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 28 18:08:51 compute-0 nova_compute[189296]: 2025-11-28 18:08:51.245 189300 DEBUG nova.compute.manager [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 28 18:08:51 compute-0 nova_compute[189296]: 2025-11-28 18:08:51.247 189300 DEBUG nova.virt.libvirt.driver [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 28 18:08:51 compute-0 nova_compute[189296]: 2025-11-28 18:08:51.247 189300 INFO nova.virt.libvirt.driver [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Creating image(s)
Nov 28 18:08:51 compute-0 nova_compute[189296]: 2025-11-28 18:08:51.248 189300 DEBUG oslo_concurrency.lockutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "/var/lib/nova/instances/f4cd3d4f-2952-4e03-95f4-459cddcc17c9/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:08:51 compute-0 nova_compute[189296]: 2025-11-28 18:08:51.248 189300 DEBUG oslo_concurrency.lockutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "/var/lib/nova/instances/f4cd3d4f-2952-4e03-95f4-459cddcc17c9/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:08:51 compute-0 nova_compute[189296]: 2025-11-28 18:08:51.249 189300 DEBUG oslo_concurrency.lockutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "/var/lib/nova/instances/f4cd3d4f-2952-4e03-95f4-459cddcc17c9/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:08:51 compute-0 nova_compute[189296]: 2025-11-28 18:08:51.249 189300 DEBUG oslo_concurrency.lockutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "14d87f60afaabf504203a4757919b9a5f2b5b19a" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:08:51 compute-0 nova_compute[189296]: 2025-11-28 18:08:51.250 189300 DEBUG oslo_concurrency.lockutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "14d87f60afaabf504203a4757919b9a5f2b5b19a" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.979 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.980 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.980 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.981 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc143395760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.981 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.983 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.983 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.983 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.985 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.985 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.985 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.986 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.986 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.989 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.989 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.987 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '738e5649-3e79-434b-9fbe-4aff6d71b051', 'name': 'vn-7knpyto-cwp5r5rzhumi-q43femobqz35-vnf-twxbbv63dycu', 'flavor': {'id': 'e125fa74-9e9f-47dc-8c8e-699980f99f10', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'f54c2688-82d2-4cd3-8c3b-96e774162948'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '79ee04b003ca4eb8a045699c7852a8b0', 'user_id': '6a35450c34a344b1a4e63aae1be2b971', 'hostId': 'db9a2769e8f144ae30ff05291a20072f031ca2fe14565f94b8d8a651', 'status': 'active', 'metadata': {'metering.server_group': 'ac6a0a76-f006-4c50-a4a8-904a1f128161'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.992 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5d10f9fc-89ea-4059-8532-7e0aec0791d6', 'name': 'test_0', 'flavor': {'id': 'e125fa74-9e9f-47dc-8c8e-699980f99f10', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'f54c2688-82d2-4cd3-8c3b-96e774162948'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '79ee04b003ca4eb8a045699c7852a8b0', 'user_id': '6a35450c34a344b1a4e63aae1be2b971', 'hostId': 'db9a2769e8f144ae30ff05291a20072f031ca2fe14565f94b8d8a651', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.992 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.992 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.992 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.992 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:08:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:51.993 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-28T18:08:51.992827) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.016 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.016 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.016 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.039 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.039 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.039 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.040 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.040 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc1433970b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.040 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.040 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.040 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.040 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.041 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-28T18:08:52.040937) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:08:52 compute-0 nova_compute[189296]: 2025-11-28 18:08:52.087 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.103 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.103 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.104 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.173 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.173 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.173 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.174 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.174 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc1433971d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.174 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.174 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.174 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.175 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.175 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.latency volume: 351803974 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.175 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-28T18:08:52.175039) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.175 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.latency volume: 86546736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.175 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.latency volume: 62239108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.176 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.latency volume: 284678818 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.176 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.latency volume: 69824352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.176 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.latency volume: 37055244 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.177 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.177 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc143397c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.178 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.178 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.178 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.178 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.178 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-28T18:08:52.178463) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.182 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.185 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.185 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.185 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc143397620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.185 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.185 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.186 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.186 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.186 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-28T18:08:52.186086) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.207 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.228 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.228 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.228 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc143397260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.229 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.229 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.229 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.229 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.229 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.229 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-28T18:08:52.229266) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.229 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.229 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.230 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.230 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.230 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.230 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.231 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc143397290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.231 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.231 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.231 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.231 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.231 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-28T18:08:52.231697) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.232 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.232 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.232 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.232 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.233 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.233 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.233 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.233 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc143408290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.234 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.234 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.234 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.234 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.234 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.234 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-28T18:08:52.234434) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.234 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.235 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.235 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc1433972f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.235 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.235 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.235 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.235 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.236 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.latency volume: 951715343 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.236 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.latency volume: 7967925 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.236 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-28T18:08:52.235894) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.236 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.237 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.latency volume: 646402207 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.237 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.latency volume: 6041958 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.237 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.238 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.238 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc144640f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.238 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.238 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.238 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.238 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.238 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.239 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.239 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-28T18:08:52.238671) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.239 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.239 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.239 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.240 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.240 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.240 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc1433976b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.240 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.241 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.241 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.241 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.241 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.241 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-28T18:08:52.241275) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.241 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.242 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.242 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc143397fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.242 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.242 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.242 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.242 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.242 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-28T18:08:52.242689) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.242 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.243 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.243 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.243 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc14457db80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.243 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.243 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.243 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.244 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.244 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/cpu volume: 35800000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.244 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-28T18:08:52.244014) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.244 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/cpu volume: 39640000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.244 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.244 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc143397950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.245 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.245 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc143397380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.245 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.245 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.245 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.245 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.245 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.246 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-28T18:08:52.245543) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.246 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc143397bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.246 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.246 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.246 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.246 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.246 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-28T18:08:52.246577) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.246 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.247 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.packets volume: 29 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.247 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.247 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc1433973e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.247 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.247 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.247 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.247 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.248 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-28T18:08:52.247772) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.248 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.248 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc143397c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.248 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.248 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.248 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.248 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.249 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.249 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-28T18:08:52.248793) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.249 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.249 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.249 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc143397ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.249 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.249 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.249 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.250 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.250 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.250 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.250 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-28T18:08:52.250013) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.250 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.251 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc1460ad370>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.251 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.251 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.251 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.251 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.251 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.251 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-28T18:08:52.251419) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.251 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.252 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.252 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.allocation volume: 21962752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.252 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.252 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.253 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.253 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc143397d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.253 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.253 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.253 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.253 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.253 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.253 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-28T18:08:52.253462) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.254 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.254 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.254 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc143397e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.254 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.254 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc143397650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.254 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.254 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.254 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.254 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.254 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.255 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.bytes volume: 2472 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.255 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.255 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc143397e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.255 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-28T18:08:52.254848) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.256 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.256 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.256 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.256 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.256 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.256 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.257 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.257 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-28T18:08:52.256335) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.257 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc143397f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.257 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.257 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.257 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.257 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.257 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.258 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-28T18:08:52.257764) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.258 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.258 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.258 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc143397230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.258 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.258 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.258 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.259 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.259 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.259 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-28T18:08:52.259040) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.259 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.259 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.260 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.261 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.261 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.261 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.262 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.262 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.262 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.262 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.262 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.262 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.262 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.262 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:08:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:08:52.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:08:52 compute-0 nova_compute[189296]: 2025-11-28 18:08:52.378 189300 DEBUG oslo_concurrency.processutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/14d87f60afaabf504203a4757919b9a5f2b5b19a.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:08:52 compute-0 nova_compute[189296]: 2025-11-28 18:08:52.440 189300 DEBUG oslo_concurrency.processutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/14d87f60afaabf504203a4757919b9a5f2b5b19a.part --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:08:52 compute-0 nova_compute[189296]: 2025-11-28 18:08:52.441 189300 DEBUG nova.virt.images [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] e0d18d51-cf24-4766-b402-e269ffbff1cb was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 28 18:08:52 compute-0 nova_compute[189296]: 2025-11-28 18:08:52.443 189300 DEBUG nova.privsep.utils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 28 18:08:52 compute-0 nova_compute[189296]: 2025-11-28 18:08:52.443 189300 DEBUG oslo_concurrency.processutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/14d87f60afaabf504203a4757919b9a5f2b5b19a.part /var/lib/nova/instances/_base/14d87f60afaabf504203a4757919b9a5f2b5b19a.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:08:52 compute-0 nova_compute[189296]: 2025-11-28 18:08:52.592 189300 DEBUG oslo_concurrency.processutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/14d87f60afaabf504203a4757919b9a5f2b5b19a.part /var/lib/nova/instances/_base/14d87f60afaabf504203a4757919b9a5f2b5b19a.converted" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:08:52 compute-0 nova_compute[189296]: 2025-11-28 18:08:52.596 189300 DEBUG oslo_concurrency.processutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/14d87f60afaabf504203a4757919b9a5f2b5b19a.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:08:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:08:52.615 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:08:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:08:52.616 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:08:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:08:52.616 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:08:52 compute-0 nova_compute[189296]: 2025-11-28 18:08:52.676 189300 DEBUG oslo_concurrency.processutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/14d87f60afaabf504203a4757919b9a5f2b5b19a.converted --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:08:52 compute-0 nova_compute[189296]: 2025-11-28 18:08:52.678 189300 DEBUG oslo_concurrency.lockutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "14d87f60afaabf504203a4757919b9a5f2b5b19a" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.428s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:08:52 compute-0 nova_compute[189296]: 2025-11-28 18:08:52.693 189300 DEBUG oslo_concurrency.processutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/14d87f60afaabf504203a4757919b9a5f2b5b19a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:08:52 compute-0 nova_compute[189296]: 2025-11-28 18:08:52.752 189300 DEBUG oslo_concurrency.processutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/14d87f60afaabf504203a4757919b9a5f2b5b19a --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:08:52 compute-0 nova_compute[189296]: 2025-11-28 18:08:52.753 189300 DEBUG oslo_concurrency.lockutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "14d87f60afaabf504203a4757919b9a5f2b5b19a" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:08:52 compute-0 nova_compute[189296]: 2025-11-28 18:08:52.753 189300 DEBUG oslo_concurrency.lockutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "14d87f60afaabf504203a4757919b9a5f2b5b19a" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:08:52 compute-0 nova_compute[189296]: 2025-11-28 18:08:52.765 189300 DEBUG oslo_concurrency.processutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/14d87f60afaabf504203a4757919b9a5f2b5b19a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:08:52 compute-0 nova_compute[189296]: 2025-11-28 18:08:52.836 189300 DEBUG oslo_concurrency.processutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/14d87f60afaabf504203a4757919b9a5f2b5b19a --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:08:52 compute-0 nova_compute[189296]: 2025-11-28 18:08:52.837 189300 DEBUG oslo_concurrency.processutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/14d87f60afaabf504203a4757919b9a5f2b5b19a,backing_fmt=raw /var/lib/nova/instances/f4cd3d4f-2952-4e03-95f4-459cddcc17c9/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:08:52 compute-0 nova_compute[189296]: 2025-11-28 18:08:52.875 189300 DEBUG oslo_concurrency.processutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/14d87f60afaabf504203a4757919b9a5f2b5b19a,backing_fmt=raw /var/lib/nova/instances/f4cd3d4f-2952-4e03-95f4-459cddcc17c9/disk 1073741824" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:08:52 compute-0 nova_compute[189296]: 2025-11-28 18:08:52.876 189300 DEBUG oslo_concurrency.lockutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "14d87f60afaabf504203a4757919b9a5f2b5b19a" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:08:52 compute-0 nova_compute[189296]: 2025-11-28 18:08:52.876 189300 DEBUG oslo_concurrency.processutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/14d87f60afaabf504203a4757919b9a5f2b5b19a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:08:52 compute-0 nova_compute[189296]: 2025-11-28 18:08:52.932 189300 DEBUG oslo_concurrency.processutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/14d87f60afaabf504203a4757919b9a5f2b5b19a --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:08:52 compute-0 nova_compute[189296]: 2025-11-28 18:08:52.933 189300 DEBUG nova.virt.disk.api [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Checking if we can resize image /var/lib/nova/instances/f4cd3d4f-2952-4e03-95f4-459cddcc17c9/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 28 18:08:52 compute-0 nova_compute[189296]: 2025-11-28 18:08:52.933 189300 DEBUG oslo_concurrency.processutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4cd3d4f-2952-4e03-95f4-459cddcc17c9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:08:52 compute-0 nova_compute[189296]: 2025-11-28 18:08:52.995 189300 DEBUG oslo_concurrency.processutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4cd3d4f-2952-4e03-95f4-459cddcc17c9/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:08:52 compute-0 nova_compute[189296]: 2025-11-28 18:08:52.996 189300 DEBUG nova.virt.disk.api [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Cannot resize image /var/lib/nova/instances/f4cd3d4f-2952-4e03-95f4-459cddcc17c9/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 28 18:08:52 compute-0 nova_compute[189296]: 2025-11-28 18:08:52.997 189300 DEBUG nova.objects.instance [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lazy-loading 'migration_context' on Instance uuid f4cd3d4f-2952-4e03-95f4-459cddcc17c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.017 189300 DEBUG oslo_concurrency.lockutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "/var/lib/nova/instances/f4cd3d4f-2952-4e03-95f4-459cddcc17c9/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.017 189300 DEBUG oslo_concurrency.lockutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "/var/lib/nova/instances/f4cd3d4f-2952-4e03-95f4-459cddcc17c9/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.018 189300 DEBUG oslo_concurrency.lockutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "/var/lib/nova/instances/f4cd3d4f-2952-4e03-95f4-459cddcc17c9/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.030 189300 DEBUG oslo_concurrency.processutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.086 189300 DEBUG oslo_concurrency.processutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.087 189300 DEBUG oslo_concurrency.lockutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.087 189300 DEBUG oslo_concurrency.lockutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.098 189300 DEBUG oslo_concurrency.processutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.155 189300 DEBUG oslo_concurrency.processutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.157 189300 DEBUG oslo_concurrency.processutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/f4cd3d4f-2952-4e03-95f4-459cddcc17c9/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.203 189300 DEBUG oslo_concurrency.processutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/f4cd3d4f-2952-4e03-95f4-459cddcc17c9/disk.eph0 1073741824" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.204 189300 DEBUG oslo_concurrency.lockutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.204 189300 DEBUG oslo_concurrency.processutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.261 189300 DEBUG oslo_concurrency.processutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.262 189300 DEBUG nova.virt.libvirt.driver [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.262 189300 DEBUG nova.virt.libvirt.driver [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Ensure instance console log exists: /var/lib/nova/instances/f4cd3d4f-2952-4e03-95f4-459cddcc17c9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.263 189300 DEBUG oslo_concurrency.lockutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.263 189300 DEBUG oslo_concurrency.lockutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.263 189300 DEBUG oslo_concurrency.lockutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.265 189300 DEBUG nova.virt.libvirt.driver [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-28T18:08:38Z,direct_url=<?>,disk_format='qcow2',id=e0d18d51-cf24-4766-b402-e269ffbff1cb,min_disk=0,min_ram=0,name='fvt_testing_image',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-28T18:08:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'image_id': 'e0d18d51-cf24-4766-b402-e269ffbff1cb'}], 'ephemerals': [{'device_type': 'disk', 'guest_format': None, 'size': 1, 'encryption_options': None, 'device_name': '/dev/vdb', 'encrypted': False, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.271 189300 WARNING nova.virt.libvirt.driver [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.277 189300 DEBUG nova.virt.libvirt.host [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.278 189300 DEBUG nova.virt.libvirt.host [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.280 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.283 189300 DEBUG nova.virt.libvirt.host [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.283 189300 DEBUG nova.virt.libvirt.host [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.284 189300 DEBUG nova.virt.libvirt.driver [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.284 189300 DEBUG nova.virt.hardware [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-28T18:08:46Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='c21fda09-79e5-4447-b8a1-0a4b33ce3854',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-28T18:08:38Z,direct_url=<?>,disk_format='qcow2',id=e0d18d51-cf24-4766-b402-e269ffbff1cb,min_disk=0,min_ram=0,name='fvt_testing_image',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-28T18:08:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.285 189300 DEBUG nova.virt.hardware [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.285 189300 DEBUG nova.virt.hardware [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.285 189300 DEBUG nova.virt.hardware [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.285 189300 DEBUG nova.virt.hardware [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.286 189300 DEBUG nova.virt.hardware [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.286 189300 DEBUG nova.virt.hardware [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.286 189300 DEBUG nova.virt.hardware [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.287 189300 DEBUG nova.virt.hardware [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.287 189300 DEBUG nova.virt.hardware [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.287 189300 DEBUG nova.virt.hardware [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.291 189300 DEBUG nova.objects.instance [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lazy-loading 'pci_devices' on Instance uuid f4cd3d4f-2952-4e03-95f4-459cddcc17c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.311 189300 DEBUG nova.virt.libvirt.driver [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] End _get_guest_xml xml=<domain type="kvm">
Nov 28 18:08:53 compute-0 nova_compute[189296]:  <uuid>f4cd3d4f-2952-4e03-95f4-459cddcc17c9</uuid>
Nov 28 18:08:53 compute-0 nova_compute[189296]:  <name>instance-00000006</name>
Nov 28 18:08:53 compute-0 nova_compute[189296]:  <memory>524288</memory>
Nov 28 18:08:53 compute-0 nova_compute[189296]:  <vcpu>1</vcpu>
Nov 28 18:08:53 compute-0 nova_compute[189296]:  <metadata>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 28 18:08:53 compute-0 nova_compute[189296]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:      <nova:name>fvt_testing_server</nova:name>
Nov 28 18:08:53 compute-0 nova_compute[189296]:      <nova:creationTime>2025-11-28 18:08:53</nova:creationTime>
Nov 28 18:08:53 compute-0 nova_compute[189296]:      <nova:flavor name="fvt_testing_flavor">
Nov 28 18:08:53 compute-0 nova_compute[189296]:        <nova:memory>512</nova:memory>
Nov 28 18:08:53 compute-0 nova_compute[189296]:        <nova:disk>1</nova:disk>
Nov 28 18:08:53 compute-0 nova_compute[189296]:        <nova:swap>0</nova:swap>
Nov 28 18:08:53 compute-0 nova_compute[189296]:        <nova:ephemeral>1</nova:ephemeral>
Nov 28 18:08:53 compute-0 nova_compute[189296]:        <nova:vcpus>1</nova:vcpus>
Nov 28 18:08:53 compute-0 nova_compute[189296]:      </nova:flavor>
Nov 28 18:08:53 compute-0 nova_compute[189296]:      <nova:owner>
Nov 28 18:08:53 compute-0 nova_compute[189296]:        <nova:user uuid="6a35450c34a344b1a4e63aae1be2b971">admin</nova:user>
Nov 28 18:08:53 compute-0 nova_compute[189296]:        <nova:project uuid="79ee04b003ca4eb8a045699c7852a8b0">admin</nova:project>
Nov 28 18:08:53 compute-0 nova_compute[189296]:      </nova:owner>
Nov 28 18:08:53 compute-0 nova_compute[189296]:      <nova:root type="image" uuid="e0d18d51-cf24-4766-b402-e269ffbff1cb"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:      <nova:ports/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    </nova:instance>
Nov 28 18:08:53 compute-0 nova_compute[189296]:  </metadata>
Nov 28 18:08:53 compute-0 nova_compute[189296]:  <sysinfo type="smbios">
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <system>
Nov 28 18:08:53 compute-0 nova_compute[189296]:      <entry name="manufacturer">RDO</entry>
Nov 28 18:08:53 compute-0 nova_compute[189296]:      <entry name="product">OpenStack Compute</entry>
Nov 28 18:08:53 compute-0 nova_compute[189296]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 28 18:08:53 compute-0 nova_compute[189296]:      <entry name="serial">f4cd3d4f-2952-4e03-95f4-459cddcc17c9</entry>
Nov 28 18:08:53 compute-0 nova_compute[189296]:      <entry name="uuid">f4cd3d4f-2952-4e03-95f4-459cddcc17c9</entry>
Nov 28 18:08:53 compute-0 nova_compute[189296]:      <entry name="family">Virtual Machine</entry>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    </system>
Nov 28 18:08:53 compute-0 nova_compute[189296]:  </sysinfo>
Nov 28 18:08:53 compute-0 nova_compute[189296]:  <os>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <boot dev="hd"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <smbios mode="sysinfo"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:  </os>
Nov 28 18:08:53 compute-0 nova_compute[189296]:  <features>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <acpi/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <apic/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <vmcoreinfo/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:  </features>
Nov 28 18:08:53 compute-0 nova_compute[189296]:  <clock offset="utc">
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <timer name="pit" tickpolicy="delay"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <timer name="hpet" present="no"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:  </clock>
Nov 28 18:08:53 compute-0 nova_compute[189296]:  <cpu mode="host-model" match="exact">
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <topology sockets="1" cores="1" threads="1"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:  </cpu>
Nov 28 18:08:53 compute-0 nova_compute[189296]:  <devices>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <disk type="file" device="disk">
Nov 28 18:08:53 compute-0 nova_compute[189296]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/f4cd3d4f-2952-4e03-95f4-459cddcc17c9/disk"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:      <target dev="vda" bus="virtio"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <disk type="file" device="disk">
Nov 28 18:08:53 compute-0 nova_compute[189296]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/f4cd3d4f-2952-4e03-95f4-459cddcc17c9/disk.eph0"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:      <target dev="vdb" bus="virtio"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <disk type="file" device="cdrom">
Nov 28 18:08:53 compute-0 nova_compute[189296]:      <driver name="qemu" type="raw" cache="none"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/f4cd3d4f-2952-4e03-95f4-459cddcc17c9/disk.config"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:      <target dev="sda" bus="sata"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <serial type="pty">
Nov 28 18:08:53 compute-0 nova_compute[189296]:      <log file="/var/lib/nova/instances/f4cd3d4f-2952-4e03-95f4-459cddcc17c9/console.log" append="off"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    </serial>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <video>
Nov 28 18:08:53 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    </video>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <input type="tablet" bus="usb"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <rng model="virtio">
Nov 28 18:08:53 compute-0 nova_compute[189296]:      <backend model="random">/dev/urandom</backend>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    </rng>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <controller type="usb" index="0"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    <memballoon model="virtio">
Nov 28 18:08:53 compute-0 nova_compute[189296]:      <stats period="10"/>
Nov 28 18:08:53 compute-0 nova_compute[189296]:    </memballoon>
Nov 28 18:08:53 compute-0 nova_compute[189296]:  </devices>
Nov 28 18:08:53 compute-0 nova_compute[189296]: </domain>
Nov 28 18:08:53 compute-0 nova_compute[189296]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.368 189300 DEBUG nova.virt.libvirt.driver [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.368 189300 DEBUG nova.virt.libvirt.driver [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.368 189300 DEBUG nova.virt.libvirt.driver [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.369 189300 INFO nova.virt.libvirt.driver [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Using config drive#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.743 189300 INFO nova.virt.libvirt.driver [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Creating config drive at /var/lib/nova/instances/f4cd3d4f-2952-4e03-95f4-459cddcc17c9/disk.config#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.748 189300 DEBUG oslo_concurrency.processutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f4cd3d4f-2952-4e03-95f4-459cddcc17c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptpy7q529 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:08:53 compute-0 nova_compute[189296]: 2025-11-28 18:08:53.873 189300 DEBUG oslo_concurrency.processutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f4cd3d4f-2952-4e03-95f4-459cddcc17c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptpy7q529" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:08:54 compute-0 systemd-machined[155703]: New machine qemu-6-instance-00000006.
Nov 28 18:08:54 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Nov 28 18:08:54 compute-0 podman[244520]: 2025-11-28 18:08:54.065996042 +0000 UTC m=+0.112605302 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=f26160204c78771e78cdd2489258319b, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 28 18:08:54 compute-0 podman[244521]: 2025-11-28 18:08:54.072989667 +0000 UTC m=+0.104814867 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 28 18:08:54 compute-0 podman[244519]: 2025-11-28 18:08:54.07644323 +0000 UTC m=+0.114662771 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.tags=minimal rhel9, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc.)
Nov 28 18:08:54 compute-0 nova_compute[189296]: 2025-11-28 18:08:54.820 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764353334.8192375, f4cd3d4f-2952-4e03-95f4-459cddcc17c9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:08:54 compute-0 nova_compute[189296]: 2025-11-28 18:08:54.822 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] VM Resumed (Lifecycle Event)#033[00m
Nov 28 18:08:54 compute-0 nova_compute[189296]: 2025-11-28 18:08:54.827 189300 DEBUG nova.compute.manager [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 28 18:08:54 compute-0 nova_compute[189296]: 2025-11-28 18:08:54.828 189300 DEBUG nova.virt.libvirt.driver [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 28 18:08:54 compute-0 nova_compute[189296]: 2025-11-28 18:08:54.834 189300 INFO nova.virt.libvirt.driver [-] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Instance spawned successfully.#033[00m
Nov 28 18:08:54 compute-0 nova_compute[189296]: 2025-11-28 18:08:54.834 189300 DEBUG nova.virt.libvirt.driver [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 28 18:08:54 compute-0 nova_compute[189296]: 2025-11-28 18:08:54.844 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:08:54 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 28 18:08:54 compute-0 nova_compute[189296]: 2025-11-28 18:08:54.855 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 18:08:54 compute-0 nova_compute[189296]: 2025-11-28 18:08:54.865 189300 DEBUG nova.virt.libvirt.driver [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:08:54 compute-0 nova_compute[189296]: 2025-11-28 18:08:54.866 189300 DEBUG nova.virt.libvirt.driver [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:08:54 compute-0 nova_compute[189296]: 2025-11-28 18:08:54.866 189300 DEBUG nova.virt.libvirt.driver [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:08:54 compute-0 nova_compute[189296]: 2025-11-28 18:08:54.867 189300 DEBUG nova.virt.libvirt.driver [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:08:54 compute-0 nova_compute[189296]: 2025-11-28 18:08:54.868 189300 DEBUG nova.virt.libvirt.driver [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:08:54 compute-0 nova_compute[189296]: 2025-11-28 18:08:54.869 189300 DEBUG nova.virt.libvirt.driver [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:08:54 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 28 18:08:54 compute-0 nova_compute[189296]: 2025-11-28 18:08:54.878 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 28 18:08:54 compute-0 nova_compute[189296]: 2025-11-28 18:08:54.879 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764353334.827041, f4cd3d4f-2952-4e03-95f4-459cddcc17c9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:08:54 compute-0 nova_compute[189296]: 2025-11-28 18:08:54.879 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] VM Started (Lifecycle Event)#033[00m
Nov 28 18:08:54 compute-0 nova_compute[189296]: 2025-11-28 18:08:54.906 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:08:54 compute-0 nova_compute[189296]: 2025-11-28 18:08:54.912 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 18:08:54 compute-0 nova_compute[189296]: 2025-11-28 18:08:54.936 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 28 18:08:54 compute-0 nova_compute[189296]: 2025-11-28 18:08:54.943 189300 INFO nova.compute.manager [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Took 3.70 seconds to spawn the instance on the hypervisor.#033[00m
Nov 28 18:08:54 compute-0 nova_compute[189296]: 2025-11-28 18:08:54.944 189300 DEBUG nova.compute.manager [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:08:55 compute-0 nova_compute[189296]: 2025-11-28 18:08:55.017 189300 INFO nova.compute.manager [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Took 4.19 seconds to build instance.#033[00m
Nov 28 18:08:55 compute-0 nova_compute[189296]: 2025-11-28 18:08:55.050 189300 DEBUG oslo_concurrency.lockutils [None req-2ed9aaa9-e89c-4ab4-b8fe-7290dac74c2f 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "f4cd3d4f-2952-4e03-95f4-459cddcc17c9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.322s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:08:57 compute-0 nova_compute[189296]: 2025-11-28 18:08:57.090 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:58 compute-0 nova_compute[189296]: 2025-11-28 18:08:58.284 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:08:59 compute-0 podman[203494]: time="2025-11-28T18:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:08:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 18:08:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4780 "" "Go-http-client/1.1"
Nov 28 18:09:00 compute-0 podman[244619]: 2025-11-28 18:09:00.031108723 +0000 UTC m=+0.074259503 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 28 18:09:00 compute-0 podman[244620]: 2025-11-28 18:09:00.046751274 +0000 UTC m=+0.096329776 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi)
Nov 28 18:09:01 compute-0 openstack_network_exporter[205632]: ERROR   18:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:09:01 compute-0 openstack_network_exporter[205632]: ERROR   18:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:09:01 compute-0 openstack_network_exporter[205632]: ERROR   18:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:09:01 compute-0 openstack_network_exporter[205632]: ERROR   18:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:09:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:09:01 compute-0 openstack_network_exporter[205632]: ERROR   18:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:09:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:09:01 compute-0 podman[244653]: 2025-11-28 18:09:01.993030012 +0000 UTC m=+0.056347927 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 28 18:09:02 compute-0 podman[244654]: 2025-11-28 18:09:02.01654812 +0000 UTC m=+0.074999240 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=ubi9-container, version=9.4, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm)
Nov 28 18:09:02 compute-0 nova_compute[189296]: 2025-11-28 18:09:02.093 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:09:03 compute-0 nova_compute[189296]: 2025-11-28 18:09:03.286 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:09:04 compute-0 nova_compute[189296]: 2025-11-28 18:09:04.621 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:09:05 compute-0 podman[244695]: 2025-11-28 18:09:05.04796606 +0000 UTC m=+0.108890945 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:09:07 compute-0 nova_compute[189296]: 2025-11-28 18:09:07.094 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:09:07 compute-0 nova_compute[189296]: 2025-11-28 18:09:07.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:09:07 compute-0 nova_compute[189296]: 2025-11-28 18:09:07.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:09:08 compute-0 nova_compute[189296]: 2025-11-28 18:09:08.074 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-738e5649-3e79-434b-9fbe-4aff6d71b051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:09:08 compute-0 nova_compute[189296]: 2025-11-28 18:09:08.075 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-738e5649-3e79-434b-9fbe-4aff6d71b051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:09:08 compute-0 nova_compute[189296]: 2025-11-28 18:09:08.075 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:09:08 compute-0 nova_compute[189296]: 2025-11-28 18:09:08.289 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:09:08 compute-0 nova_compute[189296]: 2025-11-28 18:09:08.713 189300 DEBUG oslo_concurrency.lockutils [None req-cb6e50e6-b7f3-4750-8819-7f8767b8a82a 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "f4cd3d4f-2952-4e03-95f4-459cddcc17c9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:09:08 compute-0 nova_compute[189296]: 2025-11-28 18:09:08.714 189300 DEBUG oslo_concurrency.lockutils [None req-cb6e50e6-b7f3-4750-8819-7f8767b8a82a 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "f4cd3d4f-2952-4e03-95f4-459cddcc17c9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:09:08 compute-0 nova_compute[189296]: 2025-11-28 18:09:08.714 189300 DEBUG oslo_concurrency.lockutils [None req-cb6e50e6-b7f3-4750-8819-7f8767b8a82a 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "f4cd3d4f-2952-4e03-95f4-459cddcc17c9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:09:08 compute-0 nova_compute[189296]: 2025-11-28 18:09:08.714 189300 DEBUG oslo_concurrency.lockutils [None req-cb6e50e6-b7f3-4750-8819-7f8767b8a82a 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "f4cd3d4f-2952-4e03-95f4-459cddcc17c9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:09:08 compute-0 nova_compute[189296]: 2025-11-28 18:09:08.715 189300 DEBUG oslo_concurrency.lockutils [None req-cb6e50e6-b7f3-4750-8819-7f8767b8a82a 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "f4cd3d4f-2952-4e03-95f4-459cddcc17c9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:09:08 compute-0 nova_compute[189296]: 2025-11-28 18:09:08.716 189300 INFO nova.compute.manager [None req-cb6e50e6-b7f3-4750-8819-7f8767b8a82a 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Terminating instance#033[00m
Nov 28 18:09:08 compute-0 nova_compute[189296]: 2025-11-28 18:09:08.717 189300 DEBUG oslo_concurrency.lockutils [None req-cb6e50e6-b7f3-4750-8819-7f8767b8a82a 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "refresh_cache-f4cd3d4f-2952-4e03-95f4-459cddcc17c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:09:08 compute-0 nova_compute[189296]: 2025-11-28 18:09:08.717 189300 DEBUG oslo_concurrency.lockutils [None req-cb6e50e6-b7f3-4750-8819-7f8767b8a82a 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquired lock "refresh_cache-f4cd3d4f-2952-4e03-95f4-459cddcc17c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:09:08 compute-0 nova_compute[189296]: 2025-11-28 18:09:08.717 189300 DEBUG nova.network.neutron [None req-cb6e50e6-b7f3-4750-8819-7f8767b8a82a 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 28 18:09:09 compute-0 nova_compute[189296]: 2025-11-28 18:09:09.098 189300 DEBUG nova.network.neutron [None req-cb6e50e6-b7f3-4750-8819-7f8767b8a82a 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 28 18:09:09 compute-0 nova_compute[189296]: 2025-11-28 18:09:09.595 189300 DEBUG nova.network.neutron [None req-cb6e50e6-b7f3-4750-8819-7f8767b8a82a 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:09:09 compute-0 nova_compute[189296]: 2025-11-28 18:09:09.616 189300 DEBUG oslo_concurrency.lockutils [None req-cb6e50e6-b7f3-4750-8819-7f8767b8a82a 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Releasing lock "refresh_cache-f4cd3d4f-2952-4e03-95f4-459cddcc17c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:09:09 compute-0 nova_compute[189296]: 2025-11-28 18:09:09.617 189300 DEBUG nova.compute.manager [None req-cb6e50e6-b7f3-4750-8819-7f8767b8a82a 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 28 18:09:09 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Nov 28 18:09:09 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 15.794s CPU time.
Nov 28 18:09:09 compute-0 systemd-machined[155703]: Machine qemu-6-instance-00000006 terminated.
Nov 28 18:09:09 compute-0 nova_compute[189296]: 2025-11-28 18:09:09.865 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Updating instance_info_cache with network_info: [{"id": "d9985197-6aa0-4811-a620-ee1b4aa74e74", "address": "fa:16:3e:5c:e2:d6", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9985197-6a", "ovs_interfaceid": "d9985197-6aa0-4811-a620-ee1b4aa74e74", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:09:09 compute-0 nova_compute[189296]: 2025-11-28 18:09:09.917 189300 INFO nova.virt.libvirt.driver [-] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Instance destroyed successfully.#033[00m
Nov 28 18:09:09 compute-0 nova_compute[189296]: 2025-11-28 18:09:09.918 189300 DEBUG nova.objects.instance [None req-cb6e50e6-b7f3-4750-8819-7f8767b8a82a 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lazy-loading 'resources' on Instance uuid f4cd3d4f-2952-4e03-95f4-459cddcc17c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:09:10 compute-0 nova_compute[189296]: 2025-11-28 18:09:10.402 189300 INFO nova.virt.libvirt.driver [None req-cb6e50e6-b7f3-4750-8819-7f8767b8a82a 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Deleting instance files /var/lib/nova/instances/f4cd3d4f-2952-4e03-95f4-459cddcc17c9_del#033[00m
Nov 28 18:09:10 compute-0 nova_compute[189296]: 2025-11-28 18:09:10.404 189300 INFO nova.virt.libvirt.driver [None req-cb6e50e6-b7f3-4750-8819-7f8767b8a82a 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Deletion of /var/lib/nova/instances/f4cd3d4f-2952-4e03-95f4-459cddcc17c9_del complete#033[00m
Nov 28 18:09:10 compute-0 nova_compute[189296]: 2025-11-28 18:09:10.408 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-738e5649-3e79-434b-9fbe-4aff6d71b051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:09:10 compute-0 nova_compute[189296]: 2025-11-28 18:09:10.408 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:09:10 compute-0 nova_compute[189296]: 2025-11-28 18:09:10.409 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:09:10 compute-0 nova_compute[189296]: 2025-11-28 18:09:10.409 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:09:10 compute-0 nova_compute[189296]: 2025-11-28 18:09:10.409 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:09:10 compute-0 nova_compute[189296]: 2025-11-28 18:09:10.409 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:09:10 compute-0 nova_compute[189296]: 2025-11-28 18:09:10.410 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:09:10 compute-0 nova_compute[189296]: 2025-11-28 18:09:10.410 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:09:10 compute-0 nova_compute[189296]: 2025-11-28 18:09:10.463 189300 INFO nova.compute.manager [None req-cb6e50e6-b7f3-4750-8819-7f8767b8a82a 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Nov 28 18:09:10 compute-0 nova_compute[189296]: 2025-11-28 18:09:10.464 189300 DEBUG oslo.service.loopingcall [None req-cb6e50e6-b7f3-4750-8819-7f8767b8a82a 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 28 18:09:10 compute-0 nova_compute[189296]: 2025-11-28 18:09:10.464 189300 DEBUG nova.compute.manager [-] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 28 18:09:10 compute-0 nova_compute[189296]: 2025-11-28 18:09:10.464 189300 DEBUG nova.network.neutron [-] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 28 18:09:11 compute-0 nova_compute[189296]: 2025-11-28 18:09:11.080 189300 DEBUG nova.network.neutron [-] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 28 18:09:11 compute-0 nova_compute[189296]: 2025-11-28 18:09:11.103 189300 DEBUG nova.network.neutron [-] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:09:11 compute-0 nova_compute[189296]: 2025-11-28 18:09:11.122 189300 INFO nova.compute.manager [-] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Took 0.66 seconds to deallocate network for instance.#033[00m
Nov 28 18:09:11 compute-0 nova_compute[189296]: 2025-11-28 18:09:11.230 189300 DEBUG oslo_concurrency.lockutils [None req-cb6e50e6-b7f3-4750-8819-7f8767b8a82a 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:09:11 compute-0 nova_compute[189296]: 2025-11-28 18:09:11.231 189300 DEBUG oslo_concurrency.lockutils [None req-cb6e50e6-b7f3-4750-8819-7f8767b8a82a 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:09:11 compute-0 nova_compute[189296]: 2025-11-28 18:09:11.319 189300 DEBUG nova.compute.provider_tree [None req-cb6e50e6-b7f3-4750-8819-7f8767b8a82a 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:09:11 compute-0 nova_compute[189296]: 2025-11-28 18:09:11.347 189300 DEBUG nova.scheduler.client.report [None req-cb6e50e6-b7f3-4750-8819-7f8767b8a82a 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:09:11 compute-0 nova_compute[189296]: 2025-11-28 18:09:11.386 189300 DEBUG oslo_concurrency.lockutils [None req-cb6e50e6-b7f3-4750-8819-7f8767b8a82a 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:09:11 compute-0 nova_compute[189296]: 2025-11-28 18:09:11.414 189300 INFO nova.scheduler.client.report [None req-cb6e50e6-b7f3-4750-8819-7f8767b8a82a 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Deleted allocations for instance f4cd3d4f-2952-4e03-95f4-459cddcc17c9#033[00m
Nov 28 18:09:11 compute-0 nova_compute[189296]: 2025-11-28 18:09:11.516 189300 DEBUG oslo_concurrency.lockutils [None req-cb6e50e6-b7f3-4750-8819-7f8767b8a82a 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "f4cd3d4f-2952-4e03-95f4-459cddcc17c9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:09:11 compute-0 nova_compute[189296]: 2025-11-28 18:09:11.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:09:11 compute-0 nova_compute[189296]: 2025-11-28 18:09:11.649 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:09:11 compute-0 nova_compute[189296]: 2025-11-28 18:09:11.650 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:09:11 compute-0 nova_compute[189296]: 2025-11-28 18:09:11.650 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:09:11 compute-0 nova_compute[189296]: 2025-11-28 18:09:11.651 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:09:11 compute-0 nova_compute[189296]: 2025-11-28 18:09:11.836 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:09:11 compute-0 nova_compute[189296]: 2025-11-28 18:09:11.896 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:09:11 compute-0 nova_compute[189296]: 2025-11-28 18:09:11.898 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:09:11 compute-0 nova_compute[189296]: 2025-11-28 18:09:11.955 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:09:11 compute-0 nova_compute[189296]: 2025-11-28 18:09:11.957 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:09:12 compute-0 nova_compute[189296]: 2025-11-28 18:09:12.039 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:09:12 compute-0 nova_compute[189296]: 2025-11-28 18:09:12.041 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:09:12 compute-0 nova_compute[189296]: 2025-11-28 18:09:12.096 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:09:12 compute-0 nova_compute[189296]: 2025-11-28 18:09:12.124 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:09:12 compute-0 nova_compute[189296]: 2025-11-28 18:09:12.132 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:09:12 compute-0 nova_compute[189296]: 2025-11-28 18:09:12.193 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:09:12 compute-0 nova_compute[189296]: 2025-11-28 18:09:12.194 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:09:12 compute-0 nova_compute[189296]: 2025-11-28 18:09:12.251 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:09:12 compute-0 nova_compute[189296]: 2025-11-28 18:09:12.252 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:09:12 compute-0 nova_compute[189296]: 2025-11-28 18:09:12.335 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:09:12 compute-0 nova_compute[189296]: 2025-11-28 18:09:12.336 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:09:12 compute-0 nova_compute[189296]: 2025-11-28 18:09:12.414 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:09:12 compute-0 nova_compute[189296]: 2025-11-28 18:09:12.780 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:09:12 compute-0 nova_compute[189296]: 2025-11-28 18:09:12.781 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4868MB free_disk=72.33572769165039GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:09:12 compute-0 nova_compute[189296]: 2025-11-28 18:09:12.781 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:09:12 compute-0 nova_compute[189296]: 2025-11-28 18:09:12.782 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:09:12 compute-0 nova_compute[189296]: 2025-11-28 18:09:12.859 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 5d10f9fc-89ea-4059-8532-7e0aec0791d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:09:12 compute-0 nova_compute[189296]: 2025-11-28 18:09:12.859 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 738e5649-3e79-434b-9fbe-4aff6d71b051 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:09:12 compute-0 nova_compute[189296]: 2025-11-28 18:09:12.859 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:09:12 compute-0 nova_compute[189296]: 2025-11-28 18:09:12.860 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:09:12 compute-0 nova_compute[189296]: 2025-11-28 18:09:12.918 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:09:12 compute-0 nova_compute[189296]: 2025-11-28 18:09:12.931 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:09:12 compute-0 nova_compute[189296]: 2025-11-28 18:09:12.951 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:09:12 compute-0 nova_compute[189296]: 2025-11-28 18:09:12.951 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:09:13 compute-0 nova_compute[189296]: 2025-11-28 18:09:13.292 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:09:14 compute-0 podman[244758]: 2025-11-28 18:09:14.762766708 +0000 UTC m=+0.087064446 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:09:17 compute-0 nova_compute[189296]: 2025-11-28 18:09:17.099 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:09:18 compute-0 nova_compute[189296]: 2025-11-28 18:09:18.295 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:09:18 compute-0 nova_compute[189296]: 2025-11-28 18:09:18.951 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:09:22 compute-0 nova_compute[189296]: 2025-11-28 18:09:22.102 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:09:23 compute-0 nova_compute[189296]: 2025-11-28 18:09:23.297 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:09:24 compute-0 nova_compute[189296]: 2025-11-28 18:09:24.912 189300 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764353349.9110177, f4cd3d4f-2952-4e03-95f4-459cddcc17c9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:09:24 compute-0 nova_compute[189296]: 2025-11-28 18:09:24.913 189300 INFO nova.compute.manager [-] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] VM Stopped (Lifecycle Event)#033[00m
Nov 28 18:09:24 compute-0 nova_compute[189296]: 2025-11-28 18:09:24.948 189300 DEBUG nova.compute.manager [None req-45e06401-4ebc-41ca-a26f-168a8f607513 - - - - - -] [instance: f4cd3d4f-2952-4e03-95f4-459cddcc17c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:09:25 compute-0 podman[244784]: 2025-11-28 18:09:25.015025406 +0000 UTC m=+0.071948918 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350, architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 28 18:09:25 compute-0 podman[244785]: 2025-11-28 18:09:25.015042127 +0000 UTC m=+0.067652247 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, 
tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Nov 28 18:09:25 compute-0 podman[244786]: 2025-11-28 18:09:25.043331467 +0000 UTC m=+0.080117261 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 28 18:09:27 compute-0 nova_compute[189296]: 2025-11-28 18:09:27.106 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:09:28 compute-0 nova_compute[189296]: 2025-11-28 18:09:28.300 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:09:29 compute-0 podman[203494]: time="2025-11-28T18:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:09:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 18:09:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4781 "" "Go-http-client/1.1"
Nov 28 18:09:31 compute-0 podman[244843]: 2025-11-28 18:09:31.030538502 +0000 UTC m=+0.089310840 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, 
org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 28 18:09:31 compute-0 podman[244844]: 2025-11-28 18:09:31.054334667 +0000 UTC m=+0.111749622 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Nov 28 18:09:31 compute-0 openstack_network_exporter[205632]: ERROR   18:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:09:31 compute-0 openstack_network_exporter[205632]: ERROR   18:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:09:31 compute-0 openstack_network_exporter[205632]: ERROR   18:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:09:31 compute-0 openstack_network_exporter[205632]: ERROR   18:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:09:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:09:31 compute-0 openstack_network_exporter[205632]: ERROR   18:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:09:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:09:32 compute-0 nova_compute[189296]: 2025-11-28 18:09:32.108 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:09:33 compute-0 podman[244881]: 2025-11-28 18:09:33.024356558 +0000 UTC m=+0.072132292 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.4, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, io.openshift.tags=base rhel9, release=1214.1726694543, architecture=x86_64, vcs-type=git, build-date=2024-09-18T21:23:30, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public)
Nov 28 18:09:33 compute-0 podman[244880]: 2025-11-28 18:09:33.060856104 +0000 UTC m=+0.115507782 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 28 18:09:33 compute-0 nova_compute[189296]: 2025-11-28 18:09:33.302 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:09:36 compute-0 podman[244923]: 2025-11-28 18:09:36.067812554 +0000 UTC m=+0.120834098 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 28 18:09:37 compute-0 nova_compute[189296]: 2025-11-28 18:09:37.110 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:09:38 compute-0 nova_compute[189296]: 2025-11-28 18:09:38.305 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:09:42 compute-0 nova_compute[189296]: 2025-11-28 18:09:42.114 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:09:43 compute-0 nova_compute[189296]: 2025-11-28 18:09:43.308 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:09:45 compute-0 podman[244948]: 2025-11-28 18:09:45.040888677 +0000 UTC m=+0.101306894 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 28 18:09:47 compute-0 nova_compute[189296]: 2025-11-28 18:09:47.116 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:09:48 compute-0 nova_compute[189296]: 2025-11-28 18:09:48.311 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:09:52 compute-0 nova_compute[189296]: 2025-11-28 18:09:52.119 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:09:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:09:52.617 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:09:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:09:52.617 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:09:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:09:52.618 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:09:53 compute-0 nova_compute[189296]: 2025-11-28 18:09:53.321 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:09:56 compute-0 podman[244975]: 2025-11-28 18:09:56.022422364 +0000 UTC m=+0.071361034 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 28 18:09:56 compute-0 podman[244973]: 2025-11-28 18:09:56.025661232 +0000 UTC m=+0.086981444 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, name=ubi9-minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Nov 28 18:09:56 compute-0 podman[244974]: 2025-11-28 18:09:56.050425448 +0000 UTC m=+0.106619310 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=f26160204c78771e78cdd2489258319b)
Nov 28 18:09:57 compute-0 nova_compute[189296]: 2025-11-28 18:09:57.122 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:09:58 compute-0 nova_compute[189296]: 2025-11-28 18:09:58.322 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:09:59 compute-0 podman[203494]: time="2025-11-28T18:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:09:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 18:09:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4783 "" "Go-http-client/1.1"
Nov 28 18:10:01 compute-0 openstack_network_exporter[205632]: ERROR   18:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:10:01 compute-0 openstack_network_exporter[205632]: ERROR   18:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:10:01 compute-0 openstack_network_exporter[205632]: ERROR   18:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:10:01 compute-0 openstack_network_exporter[205632]: ERROR   18:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:10:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:10:01 compute-0 openstack_network_exporter[205632]: ERROR   18:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:10:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:10:02 compute-0 podman[245029]: 2025-11-28 18:10:02.016076782 +0000 UTC m=+0.069896689 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 28 18:10:02 compute-0 podman[245030]: 2025-11-28 18:10:02.0547732 +0000 UTC m=+0.086916492 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 28 18:10:02 compute-0 nova_compute[189296]: 2025-11-28 18:10:02.124 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:10:03 compute-0 nova_compute[189296]: 2025-11-28 18:10:03.326 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:10:04 compute-0 podman[245067]: 2025-11-28 18:10:04.001250483 +0000 UTC m=+0.058773035 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:10:04 compute-0 podman[245068]: 2025-11-28 18:10:04.017388197 +0000 UTC m=+0.071619471 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, io.openshift.expose-services=, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=edpm, vcs-type=git, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, architecture=x86_64, version=9.4, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc.)
Nov 28 18:10:04 compute-0 nova_compute[189296]: 2025-11-28 18:10:04.620 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:10:07 compute-0 podman[245108]: 2025-11-28 18:10:07.061845345 +0000 UTC m=+0.121015061 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 28 18:10:07 compute-0 nova_compute[189296]: 2025-11-28 18:10:07.126 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:10:07 compute-0 nova_compute[189296]: 2025-11-28 18:10:07.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:10:07 compute-0 nova_compute[189296]: 2025-11-28 18:10:07.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:10:07 compute-0 nova_compute[189296]: 2025-11-28 18:10:07.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 18:10:08 compute-0 nova_compute[189296]: 2025-11-28 18:10:08.170 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:10:08 compute-0 nova_compute[189296]: 2025-11-28 18:10:08.171 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:10:08 compute-0 nova_compute[189296]: 2025-11-28 18:10:08.171 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:10:08 compute-0 nova_compute[189296]: 2025-11-28 18:10:08.171 189300 DEBUG nova.objects.instance [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5d10f9fc-89ea-4059-8532-7e0aec0791d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:10:08 compute-0 nova_compute[189296]: 2025-11-28 18:10:08.328 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:10:09 compute-0 nova_compute[189296]: 2025-11-28 18:10:09.544 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Updating instance_info_cache with network_info: [{"id": "0e0a227a-6212-4496-8954-fe210b763d0b", "address": "fa:16:3e:28:42:00", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e0a227a-62", "ovs_interfaceid": "0e0a227a-6212-4496-8954-fe210b763d0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:10:09 compute-0 nova_compute[189296]: 2025-11-28 18:10:09.559 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:10:09 compute-0 nova_compute[189296]: 2025-11-28 18:10:09.560 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:10:09 compute-0 nova_compute[189296]: 2025-11-28 18:10:09.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:10:09 compute-0 nova_compute[189296]: 2025-11-28 18:10:09.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:10:10 compute-0 nova_compute[189296]: 2025-11-28 18:10:10.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:10:10 compute-0 nova_compute[189296]: 2025-11-28 18:10:10.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:10:10 compute-0 nova_compute[189296]: 2025-11-28 18:10:10.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:10:10 compute-0 nova_compute[189296]: 2025-11-28 18:10:10.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:10:12 compute-0 nova_compute[189296]: 2025-11-28 18:10:12.128 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:10:12 compute-0 nova_compute[189296]: 2025-11-28 18:10:12.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:10:12 compute-0 nova_compute[189296]: 2025-11-28 18:10:12.669 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:10:12 compute-0 nova_compute[189296]: 2025-11-28 18:10:12.670 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:10:12 compute-0 nova_compute[189296]: 2025-11-28 18:10:12.671 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:10:12 compute-0 nova_compute[189296]: 2025-11-28 18:10:12.671 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:10:12 compute-0 nova_compute[189296]: 2025-11-28 18:10:12.975 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:10:13 compute-0 nova_compute[189296]: 2025-11-28 18:10:13.035 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:10:13 compute-0 nova_compute[189296]: 2025-11-28 18:10:13.036 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:10:13 compute-0 nova_compute[189296]: 2025-11-28 18:10:13.097 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:10:13 compute-0 nova_compute[189296]: 2025-11-28 18:10:13.098 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:10:13 compute-0 nova_compute[189296]: 2025-11-28 18:10:13.165 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:10:13 compute-0 nova_compute[189296]: 2025-11-28 18:10:13.166 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:10:13 compute-0 nova_compute[189296]: 2025-11-28 18:10:13.224 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:10:13 compute-0 nova_compute[189296]: 2025-11-28 18:10:13.231 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:10:13 compute-0 nova_compute[189296]: 2025-11-28 18:10:13.286 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:10:13 compute-0 nova_compute[189296]: 2025-11-28 18:10:13.287 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:10:13 compute-0 nova_compute[189296]: 2025-11-28 18:10:13.330 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:10:13 compute-0 nova_compute[189296]: 2025-11-28 18:10:13.365 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:10:13 compute-0 nova_compute[189296]: 2025-11-28 18:10:13.366 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:10:13 compute-0 nova_compute[189296]: 2025-11-28 18:10:13.425 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:10:13 compute-0 nova_compute[189296]: 2025-11-28 18:10:13.426 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:10:13 compute-0 nova_compute[189296]: 2025-11-28 18:10:13.485 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:10:13 compute-0 nova_compute[189296]: 2025-11-28 18:10:13.784 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:10:13 compute-0 nova_compute[189296]: 2025-11-28 18:10:13.786 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4896MB free_disk=72.33578491210938GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:10:13 compute-0 nova_compute[189296]: 2025-11-28 18:10:13.786 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:10:13 compute-0 nova_compute[189296]: 2025-11-28 18:10:13.786 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:10:13 compute-0 nova_compute[189296]: 2025-11-28 18:10:13.976 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 5d10f9fc-89ea-4059-8532-7e0aec0791d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:10:13 compute-0 nova_compute[189296]: 2025-11-28 18:10:13.977 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 738e5649-3e79-434b-9fbe-4aff6d71b051 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:10:13 compute-0 nova_compute[189296]: 2025-11-28 18:10:13.977 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:10:13 compute-0 nova_compute[189296]: 2025-11-28 18:10:13.978 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:10:14 compute-0 nova_compute[189296]: 2025-11-28 18:10:14.036 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:10:14 compute-0 nova_compute[189296]: 2025-11-28 18:10:14.054 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:10:14 compute-0 nova_compute[189296]: 2025-11-28 18:10:14.055 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:10:14 compute-0 nova_compute[189296]: 2025-11-28 18:10:14.056 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.269s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:10:16 compute-0 podman[245158]: 2025-11-28 18:10:16.002542131 +0000 UTC m=+0.067684577 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:10:17 compute-0 nova_compute[189296]: 2025-11-28 18:10:17.131 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:10:18 compute-0 nova_compute[189296]: 2025-11-28 18:10:18.332 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:10:19 compute-0 nova_compute[189296]: 2025-11-28 18:10:19.052 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:10:19 compute-0 nova_compute[189296]: 2025-11-28 18:10:19.077 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:10:22 compute-0 nova_compute[189296]: 2025-11-28 18:10:22.133 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:10:23 compute-0 nova_compute[189296]: 2025-11-28 18:10:23.335 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:10:27 compute-0 podman[245183]: 2025-11-28 18:10:27.040576178 +0000 UTC m=+0.098344204 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=edpm, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, version=9.6, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal)
Nov 28 18:10:27 compute-0 podman[245184]: 2025-11-28 18:10:27.049459748 +0000 UTC m=+0.104477609 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=f26160204c78771e78cdd2489258319b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 28 18:10:27 compute-0 podman[245185]: 2025-11-28 18:10:27.049505379 +0000 UTC m=+0.104957041 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:10:27 compute-0 nova_compute[189296]: 2025-11-28 18:10:27.135 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:10:28 compute-0 nova_compute[189296]: 2025-11-28 18:10:28.337 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:10:29 compute-0 podman[203494]: time="2025-11-28T18:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:10:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 18:10:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4779 "" "Go-http-client/1.1"
Nov 28 18:10:31 compute-0 openstack_network_exporter[205632]: ERROR   18:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:10:31 compute-0 openstack_network_exporter[205632]: ERROR   18:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:10:31 compute-0 openstack_network_exporter[205632]: ERROR   18:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:10:31 compute-0 openstack_network_exporter[205632]: ERROR   18:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:10:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:10:31 compute-0 openstack_network_exporter[205632]: ERROR   18:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:10:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:10:32 compute-0 nova_compute[189296]: 2025-11-28 18:10:32.137 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:10:33 compute-0 podman[245241]: 2025-11-28 18:10:33.020154392 +0000 UTC m=+0.076363183 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:10:33 compute-0 podman[245240]: 2025-11-28 18:10:33.030831384 +0000 UTC m=+0.092310830 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 28 18:10:33 compute-0 nova_compute[189296]: 2025-11-28 18:10:33.339 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:10:35 compute-0 podman[245276]: 2025-11-28 18:10:35.000425316 +0000 UTC m=+0.058827366 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 18:10:35 compute-0 podman[245277]: 2025-11-28 18:10:35.010530916 +0000 UTC m=+0.065605988 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, version=9.4, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_id=edpm, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=kepler, name=ubi9, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, distribution-scope=public)
Nov 28 18:10:37 compute-0 nova_compute[189296]: 2025-11-28 18:10:37.140 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:10:38 compute-0 podman[245316]: 2025-11-28 18:10:38.086358649 +0000 UTC m=+0.148365991 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible)
Nov 28 18:10:38 compute-0 nova_compute[189296]: 2025-11-28 18:10:38.341 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:10:42 compute-0 nova_compute[189296]: 2025-11-28 18:10:42.142 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:10:43 compute-0 nova_compute[189296]: 2025-11-28 18:10:43.344 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:10:47 compute-0 podman[245340]: 2025-11-28 18:10:47.039041247 +0000 UTC m=+0.088263606 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 28 18:10:47 compute-0 nova_compute[189296]: 2025-11-28 18:10:47.146 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:10:48 compute-0 nova_compute[189296]: 2025-11-28 18:10:48.347 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.981 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.982 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.983 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc143395760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.983 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.985 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.985 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.985 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.985 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.985 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.986 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.989 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.989 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.990 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.991 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.991 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.991 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.991 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.992 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '738e5649-3e79-434b-9fbe-4aff6d71b051', 'name': 'vn-7knpyto-cwp5r5rzhumi-q43femobqz35-vnf-twxbbv63dycu', 'flavor': {'id': 'e125fa74-9e9f-47dc-8c8e-699980f99f10', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'f54c2688-82d2-4cd3-8c3b-96e774162948'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '79ee04b003ca4eb8a045699c7852a8b0', 'user_id': '6a35450c34a344b1a4e63aae1be2b971', 'hostId': 'db9a2769e8f144ae30ff05291a20072f031ca2fe14565f94b8d8a651', 'status': 'active', 'metadata': {'metering.server_group': 'ac6a0a76-f006-4c50-a4a8-904a1f128161'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.992 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.993 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.993 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.994 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.994 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.996 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5d10f9fc-89ea-4059-8532-7e0aec0791d6', 'name': 'test_0', 'flavor': {'id': 'e125fa74-9e9f-47dc-8c8e-699980f99f10', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'f54c2688-82d2-4cd3-8c3b-96e774162948'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '79ee04b003ca4eb8a045699c7852a8b0', 'user_id': '6a35450c34a344b1a4e63aae1be2b971', 'hostId': 'db9a2769e8f144ae30ff05291a20072f031ca2fe14565f94b8d8a651', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.996 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.996 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.997 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.997 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:10:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:51.998 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-28T18:10:51.997370) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.041 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.042 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.042 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.081 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.081 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.082 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.082 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.082 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc1433970b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.082 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.083 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.083 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.083 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.083 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-28T18:10:52.083238) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:10:52 compute-0 nova_compute[189296]: 2025-11-28 18:10:52.148 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.154 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.155 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.155 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.216 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.217 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.217 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.218 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.218 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc1433971d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.218 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.218 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.219 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.219 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.219 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.latency volume: 351803974 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.219 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-28T18:10:52.219078) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.219 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.latency volume: 86546736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.220 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.latency volume: 62239108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.220 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.latency volume: 284678818 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.220 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.latency volume: 69824352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.220 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.latency volume: 37055244 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.221 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.221 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc143397c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.221 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.221 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.222 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.222 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.222 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-28T18:10:52.222091) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.226 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.230 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.231 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.231 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc143397620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.231 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.231 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.232 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.232 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.232 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-28T18:10:52.232228) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.260 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/memory.usage volume: 48.921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.280 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.281 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.281 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc143397260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.281 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.282 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.282 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.282 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.282 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.283 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-28T18:10:52.282552) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.283 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.283 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.284 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.284 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.285 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.286 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.286 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc143397290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.286 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.286 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.286 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.287 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.287 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.287 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.287 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-28T18:10:52.287020) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.288 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.288 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.288 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.289 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.289 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.289 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc143408290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.290 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.290 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.290 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.290 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.291 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-28T18:10:52.290538) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.291 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.291 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.292 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.292 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc1433972f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.292 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.292 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.292 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.293 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.293 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.latency volume: 951715343 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.293 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.latency volume: 7967925 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.294 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.294 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-28T18:10:52.293274) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.295 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.latency volume: 646402207 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.295 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.latency volume: 6041958 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.296 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.296 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.296 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc144640f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.296 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.297 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.297 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.297 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.297 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.298 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.298 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.299 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.299 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-28T18:10:52.297439) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.300 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.300 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.301 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.301 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc1433976b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.301 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.301 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.301 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.301 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.301 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.302 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.302 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-28T18:10:52.301812) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.302 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.302 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc143397fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.302 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.302 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.302 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.302 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.303 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.303 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.303 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.303 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc14457db80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.303 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.303 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.303 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.304 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.304 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/cpu volume: 37220000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.304 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-28T18:10:52.302924) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.304 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-28T18:10:52.304010) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.304 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/cpu volume: 41090000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.304 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.304 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc143397950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.305 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.305 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc143397380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.305 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.305 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.305 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.305 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.305 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.305 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc143397bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.305 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-28T18:10:52.305404) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.306 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.306 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.306 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.306 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.306 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.306 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-28T18:10:52.306275) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.306 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.packets volume: 29 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.306 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.307 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc1433973e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.307 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.307 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.307 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.307 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.307 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.307 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc143397c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.307 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.307 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.308 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.308 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.308 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.308 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.308 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.308 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc143397ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.308 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.310 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-28T18:10:52.307312) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.310 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.310 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.310 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-28T18:10:52.308086) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.310 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.311 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.311 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.311 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-28T18:10:52.310946) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.311 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.311 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc1460ad370>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.311 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.311 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.311 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.312 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.312 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.312 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.312 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.312 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.allocation volume: 21962752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.312 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.313 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.313 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.313 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc143397d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.313 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.313 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.313 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.313 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.314 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.314 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-28T18:10:52.312034) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.314 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-28T18:10:52.313952) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.314 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.314 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.314 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc143397e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.314 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.314 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc143397650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.314 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.315 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.315 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.315 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.315 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.315 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.incoming.bytes volume: 2472 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.315 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.315 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc143397e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.315 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.316 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.316 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.316 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.316 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.316 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.316 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.316 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc143397f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.317 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.317 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.317 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.317 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.317 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-28T18:10:52.315173) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-28T18:10:52.316230) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.317 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-28T18:10:52.317387) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.318 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.318 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc143397230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.318 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.318 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.318 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.318 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.318 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.318 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.319 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-28T18:10:52.318517) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.319 15 DEBUG ceilometer.compute.pollsters [-] 738e5649-3e79-434b-9fbe-4aff6d71b051/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.319 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.319 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.319 15 DEBUG ceilometer.compute.pollsters [-] 5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.320 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.320 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.320 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.320 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.320 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.320 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.321 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.321 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.321 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.321 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.321 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.321 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.321 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.321 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.322 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.322 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.322 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.322 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.322 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.322 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.322 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.322 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.322 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.322 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.323 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.323 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:10:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:10:52.323 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:10:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:10:52.618 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:10:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:10:52.619 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:10:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:10:52.619 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:10:53 compute-0 nova_compute[189296]: 2025-11-28 18:10:53.348 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:10:54 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 28 18:10:57 compute-0 nova_compute[189296]: 2025-11-28 18:10:57.153 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:10:58 compute-0 podman[245367]: 2025-11-28 18:10:58.027611501 +0000 UTC m=+0.090280805 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, release=1755695350, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container)
Nov 28 18:10:58 compute-0 podman[245368]: 2025-11-28 18:10:58.047553636 +0000 UTC m=+0.106066929 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=f26160204c78771e78cdd2489258319b)
Nov 28 18:10:58 compute-0 podman[245369]: 2025-11-28 18:10:58.081576233 +0000 UTC m=+0.123390130 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:10:58 compute-0 nova_compute[189296]: 2025-11-28 18:10:58.350 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:10:59 compute-0 podman[203494]: time="2025-11-28T18:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:10:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 18:10:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4782 "" "Go-http-client/1.1"
Nov 28 18:11:01 compute-0 openstack_network_exporter[205632]: ERROR   18:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:11:01 compute-0 openstack_network_exporter[205632]: ERROR   18:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:11:01 compute-0 openstack_network_exporter[205632]: ERROR   18:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:11:01 compute-0 openstack_network_exporter[205632]: ERROR   18:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:11:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:11:01 compute-0 openstack_network_exporter[205632]: ERROR   18:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:11:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:11:02 compute-0 nova_compute[189296]: 2025-11-28 18:11:02.156 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:11:03 compute-0 nova_compute[189296]: 2025-11-28 18:11:03.351 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:11:04 compute-0 podman[245424]: 2025-11-28 18:11:04.015375629 +0000 UTC m=+0.081616574 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 28 18:11:04 compute-0 podman[245425]: 2025-11-28 18:11:04.02450576 +0000 UTC m=+0.087017775 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:11:06 compute-0 podman[245460]: 2025-11-28 18:11:06.021563047 +0000 UTC m=+0.074113232 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 28 18:11:06 compute-0 podman[245461]: 2025-11-28 18:11:06.047590889 +0000 UTC m=+0.087727062 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible)
Nov 28 18:11:06 compute-0 nova_compute[189296]: 2025-11-28 18:11:06.646 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:11:07 compute-0 nova_compute[189296]: 2025-11-28 18:11:07.158 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:11:08 compute-0 nova_compute[189296]: 2025-11-28 18:11:08.355 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:11:08 compute-0 nova_compute[189296]: 2025-11-28 18:11:08.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:11:08 compute-0 nova_compute[189296]: 2025-11-28 18:11:08.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:11:09 compute-0 podman[245503]: 2025-11-28 18:11:09.062894337 +0000 UTC m=+0.126608607 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 28 18:11:09 compute-0 nova_compute[189296]: 2025-11-28 18:11:09.310 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-738e5649-3e79-434b-9fbe-4aff6d71b051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:11:09 compute-0 nova_compute[189296]: 2025-11-28 18:11:09.310 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-738e5649-3e79-434b-9fbe-4aff6d71b051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:11:09 compute-0 nova_compute[189296]: 2025-11-28 18:11:09.310 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:11:11 compute-0 nova_compute[189296]: 2025-11-28 18:11:11.378 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Updating instance_info_cache with network_info: [{"id": "d9985197-6aa0-4811-a620-ee1b4aa74e74", "address": "fa:16:3e:5c:e2:d6", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9985197-6a", "ovs_interfaceid": "d9985197-6aa0-4811-a620-ee1b4aa74e74", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:11:11 compute-0 nova_compute[189296]: 2025-11-28 18:11:11.717 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-738e5649-3e79-434b-9fbe-4aff6d71b051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:11:11 compute-0 nova_compute[189296]: 2025-11-28 18:11:11.717 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:11:11 compute-0 nova_compute[189296]: 2025-11-28 18:11:11.718 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:11:11 compute-0 nova_compute[189296]: 2025-11-28 18:11:11.719 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:11:11 compute-0 nova_compute[189296]: 2025-11-28 18:11:11.719 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:11:12 compute-0 nova_compute[189296]: 2025-11-28 18:11:12.160 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:11:12 compute-0 nova_compute[189296]: 2025-11-28 18:11:12.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:11:12 compute-0 nova_compute[189296]: 2025-11-28 18:11:12.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:11:12 compute-0 nova_compute[189296]: 2025-11-28 18:11:12.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:11:13 compute-0 nova_compute[189296]: 2025-11-28 18:11:13.358 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:11:13 compute-0 nova_compute[189296]: 2025-11-28 18:11:13.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:11:14 compute-0 nova_compute[189296]: 2025-11-28 18:11:14.001 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:11:14 compute-0 nova_compute[189296]: 2025-11-28 18:11:14.001 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:11:14 compute-0 nova_compute[189296]: 2025-11-28 18:11:14.002 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:11:14 compute-0 nova_compute[189296]: 2025-11-28 18:11:14.002 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:11:14 compute-0 nova_compute[189296]: 2025-11-28 18:11:14.768 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:11:14 compute-0 nova_compute[189296]: 2025-11-28 18:11:14.836 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:11:14 compute-0 nova_compute[189296]: 2025-11-28 18:11:14.837 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:11:14 compute-0 nova_compute[189296]: 2025-11-28 18:11:14.894 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:11:14 compute-0 nova_compute[189296]: 2025-11-28 18:11:14.895 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:11:14 compute-0 nova_compute[189296]: 2025-11-28 18:11:14.979 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:11:14 compute-0 nova_compute[189296]: 2025-11-28 18:11:14.980 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:11:15 compute-0 nova_compute[189296]: 2025-11-28 18:11:15.051 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:11:15 compute-0 nova_compute[189296]: 2025-11-28 18:11:15.064 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:11:15 compute-0 nova_compute[189296]: 2025-11-28 18:11:15.133 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:11:15 compute-0 nova_compute[189296]: 2025-11-28 18:11:15.135 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:11:15 compute-0 nova_compute[189296]: 2025-11-28 18:11:15.191 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:11:15 compute-0 nova_compute[189296]: 2025-11-28 18:11:15.192 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:11:15 compute-0 nova_compute[189296]: 2025-11-28 18:11:15.248 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:11:15 compute-0 nova_compute[189296]: 2025-11-28 18:11:15.249 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:11:15 compute-0 nova_compute[189296]: 2025-11-28 18:11:15.305 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:11:15 compute-0 nova_compute[189296]: 2025-11-28 18:11:15.621 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 28 18:11:15 compute-0 nova_compute[189296]: 2025-11-28 18:11:15.622 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4844MB free_disk=72.33578491210938GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 28 18:11:15 compute-0 nova_compute[189296]: 2025-11-28 18:11:15.622 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:11:15 compute-0 nova_compute[189296]: 2025-11-28 18:11:15.623 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:11:15 compute-0 nova_compute[189296]: 2025-11-28 18:11:15.757 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 5d10f9fc-89ea-4059-8532-7e0aec0791d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 28 18:11:15 compute-0 nova_compute[189296]: 2025-11-28 18:11:15.758 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 738e5649-3e79-434b-9fbe-4aff6d71b051 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 28 18:11:15 compute-0 nova_compute[189296]: 2025-11-28 18:11:15.758 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 28 18:11:15 compute-0 nova_compute[189296]: 2025-11-28 18:11:15.758 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 28 18:11:15 compute-0 nova_compute[189296]: 2025-11-28 18:11:15.774 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing inventories for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 28 18:11:15 compute-0 nova_compute[189296]: 2025-11-28 18:11:15.802 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Updating ProviderTree inventory for provider d10a9930-4504-4222-97f7-6727a5a2d43b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 28 18:11:15 compute-0 nova_compute[189296]: 2025-11-28 18:11:15.803 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Updating inventory in ProviderTree for provider d10a9930-4504-4222-97f7-6727a5a2d43b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 28 18:11:15 compute-0 nova_compute[189296]: 2025-11-28 18:11:15.816 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing aggregate associations for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 28 18:11:15 compute-0 nova_compute[189296]: 2025-11-28 18:11:15.835 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing trait associations for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b, traits: HW_CPU_X86_ABM,COMPUTE_NODE,HW_CPU_X86_SVM,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SATA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 28 18:11:15 compute-0 nova_compute[189296]: 2025-11-28 18:11:15.900 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 28 18:11:15 compute-0 nova_compute[189296]: 2025-11-28 18:11:15.990 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 28 18:11:15 compute-0 nova_compute[189296]: 2025-11-28 18:11:15.992 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 28 18:11:15 compute-0 nova_compute[189296]: 2025-11-28 18:11:15.992 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:11:17 compute-0 nova_compute[189296]: 2025-11-28 18:11:17.164 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:11:17 compute-0 podman[245551]: 2025-11-28 18:11:17.994140593 +0000 UTC m=+0.057499948 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 28 18:11:18 compute-0 nova_compute[189296]: 2025-11-28 18:11:18.360 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:11:19 compute-0 nova_compute[189296]: 2025-11-28 18:11:19.993 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 18:11:22 compute-0 nova_compute[189296]: 2025-11-28 18:11:22.169 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:11:23 compute-0 nova_compute[189296]: 2025-11-28 18:11:23.362 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:11:27 compute-0 nova_compute[189296]: 2025-11-28 18:11:27.172 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:11:28 compute-0 nova_compute[189296]: 2025-11-28 18:11:28.364 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:11:29 compute-0 podman[245575]: 2025-11-28 18:11:29.0448372 +0000 UTC m=+0.103914795 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., architecture=x86_64, version=9.6, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41)
Nov 28 18:11:29 compute-0 podman[245576]: 2025-11-28 18:11:29.062601372 +0000 UTC m=+0.102110052 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm)
Nov 28 18:11:29 compute-0 podman[245582]: 2025-11-28 18:11:29.067363607 +0000 UTC m=+0.105970185 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 28 18:11:29 compute-0 podman[203494]: time="2025-11-28T18:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:11:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 18:11:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4780 "" "Go-http-client/1.1"
Nov 28 18:11:31 compute-0 openstack_network_exporter[205632]: ERROR   18:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:11:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:11:31 compute-0 openstack_network_exporter[205632]: ERROR   18:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:11:31 compute-0 openstack_network_exporter[205632]: ERROR   18:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:11:31 compute-0 openstack_network_exporter[205632]: ERROR   18:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:11:31 compute-0 openstack_network_exporter[205632]: ERROR   18:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:11:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:11:32 compute-0 nova_compute[189296]: 2025-11-28 18:11:32.174 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:11:33 compute-0 nova_compute[189296]: 2025-11-28 18:11:33.367 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:11:35 compute-0 podman[245633]: 2025-11-28 18:11:35.008795556 +0000 UTC m=+0.069281574 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 28 18:11:35 compute-0 podman[245634]: 2025-11-28 18:11:35.048180493 +0000 UTC m=+0.084898944 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:11:37 compute-0 podman[245671]: 2025-11-28 18:11:37.041766755 +0000 UTC m=+0.089693260 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, architecture=x86_64, config_id=edpm, release-0.7.12=, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 28 18:11:37 compute-0 podman[245670]: 2025-11-28 18:11:37.045510976 +0000 UTC m=+0.110776353 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 28 18:11:37 compute-0 nova_compute[189296]: 2025-11-28 18:11:37.176 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:11:38 compute-0 nova_compute[189296]: 2025-11-28 18:11:38.371 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:11:40 compute-0 podman[245713]: 2025-11-28 18:11:40.106955399 +0000 UTC m=+0.167525411 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 28 18:11:42 compute-0 nova_compute[189296]: 2025-11-28 18:11:42.178 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:11:43 compute-0 nova_compute[189296]: 2025-11-28 18:11:43.374 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:11:47 compute-0 nova_compute[189296]: 2025-11-28 18:11:47.181 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:11:48 compute-0 nova_compute[189296]: 2025-11-28 18:11:48.375 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:11:48 compute-0 podman[245739]: 2025-11-28 18:11:48.996239047 +0000 UTC m=+0.056217347 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 28 18:11:52 compute-0 nova_compute[189296]: 2025-11-28 18:11:52.183 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:11:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:11:52.620 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:11:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:11:52.621 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:11:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:11:52.622 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:11:53 compute-0 nova_compute[189296]: 2025-11-28 18:11:53.378 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:11:57 compute-0 nova_compute[189296]: 2025-11-28 18:11:57.186 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:11:58 compute-0 nova_compute[189296]: 2025-11-28 18:11:58.379 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:11:59 compute-0 podman[203494]: time="2025-11-28T18:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:11:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 18:11:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4782 "" "Go-http-client/1.1"
Nov 28 18:12:00 compute-0 podman[245762]: 2025-11-28 18:12:00.050416274 +0000 UTC m=+0.097129260 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, config_id=edpm, release=1755695350, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 28 18:12:00 compute-0 podman[245764]: 2025-11-28 18:12:00.060901619 +0000 UTC m=+0.089848254 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 28 18:12:00 compute-0 podman[245763]: 2025-11-28 18:12:00.07367851 +0000 UTC m=+0.115194690 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=f26160204c78771e78cdd2489258319b, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, 
org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:12:01 compute-0 openstack_network_exporter[205632]: ERROR   18:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:12:01 compute-0 openstack_network_exporter[205632]: ERROR   18:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:12:01 compute-0 openstack_network_exporter[205632]: ERROR   18:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:12:01 compute-0 openstack_network_exporter[205632]: ERROR   18:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:12:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:12:01 compute-0 openstack_network_exporter[205632]: ERROR   18:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:12:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:12:02 compute-0 nova_compute[189296]: 2025-11-28 18:12:02.188 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:03 compute-0 nova_compute[189296]: 2025-11-28 18:12:03.381 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:06 compute-0 podman[245820]: 2025-11-28 18:12:06.048979169 +0000 UTC m=+0.104103050 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, 
io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 28 18:12:06 compute-0 podman[245819]: 2025-11-28 18:12:06.072975752 +0000 UTC m=+0.132775946 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, 
io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Nov 28 18:12:06 compute-0 nova_compute[189296]: 2025-11-28 18:12:06.621 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:12:07 compute-0 nova_compute[189296]: 2025-11-28 18:12:07.191 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:08 compute-0 podman[245857]: 2025-11-28 18:12:08.02979571 +0000 UTC m=+0.073764173 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 28 18:12:08 compute-0 podman[245858]: 2025-11-28 18:12:08.087286587 +0000 UTC m=+0.117682720 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, version=9.4, com.redhat.component=ubi9-container, io.openshift.expose-services=, managed_by=edpm_ansible, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, release=1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 28 18:12:08 compute-0 nova_compute[189296]: 2025-11-28 18:12:08.383 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:08 compute-0 nova_compute[189296]: 2025-11-28 18:12:08.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:12:08 compute-0 nova_compute[189296]: 2025-11-28 18:12:08.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:12:08 compute-0 nova_compute[189296]: 2025-11-28 18:12:08.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 18:12:09 compute-0 nova_compute[189296]: 2025-11-28 18:12:09.327 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:12:09 compute-0 nova_compute[189296]: 2025-11-28 18:12:09.328 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:12:09 compute-0 nova_compute[189296]: 2025-11-28 18:12:09.328 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:12:09 compute-0 nova_compute[189296]: 2025-11-28 18:12:09.328 189300 DEBUG nova.objects.instance [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5d10f9fc-89ea-4059-8532-7e0aec0791d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:12:11 compute-0 podman[245898]: 2025-11-28 18:12:11.076356943 +0000 UTC m=+0.124829434 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:12:11 compute-0 nova_compute[189296]: 2025-11-28 18:12:11.707 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Updating instance_info_cache with network_info: [{"id": "0e0a227a-6212-4496-8954-fe210b763d0b", "address": "fa:16:3e:28:42:00", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e0a227a-62", "ovs_interfaceid": "0e0a227a-6212-4496-8954-fe210b763d0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:12:12 compute-0 nova_compute[189296]: 2025-11-28 18:12:12.091 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-5d10f9fc-89ea-4059-8532-7e0aec0791d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:12:12 compute-0 nova_compute[189296]: 2025-11-28 18:12:12.092 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:12:12 compute-0 nova_compute[189296]: 2025-11-28 18:12:12.094 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:12:12 compute-0 nova_compute[189296]: 2025-11-28 18:12:12.194 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:12 compute-0 nova_compute[189296]: 2025-11-28 18:12:12.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:12:12 compute-0 nova_compute[189296]: 2025-11-28 18:12:12.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:12:12 compute-0 nova_compute[189296]: 2025-11-28 18:12:12.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:12:12 compute-0 nova_compute[189296]: 2025-11-28 18:12:12.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:12:13 compute-0 nova_compute[189296]: 2025-11-28 18:12:13.384 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:13 compute-0 nova_compute[189296]: 2025-11-28 18:12:13.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:12:14 compute-0 nova_compute[189296]: 2025-11-28 18:12:14.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:12:14 compute-0 nova_compute[189296]: 2025-11-28 18:12:14.679 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:12:14 compute-0 nova_compute[189296]: 2025-11-28 18:12:14.679 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:12:14 compute-0 nova_compute[189296]: 2025-11-28 18:12:14.680 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:12:14 compute-0 nova_compute[189296]: 2025-11-28 18:12:14.680 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:12:14 compute-0 nova_compute[189296]: 2025-11-28 18:12:14.774 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:12:14 compute-0 nova_compute[189296]: 2025-11-28 18:12:14.833 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:12:14 compute-0 nova_compute[189296]: 2025-11-28 18:12:14.834 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:12:14 compute-0 nova_compute[189296]: 2025-11-28 18:12:14.896 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:12:14 compute-0 nova_compute[189296]: 2025-11-28 18:12:14.897 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:12:14 compute-0 nova_compute[189296]: 2025-11-28 18:12:14.959 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:12:14 compute-0 nova_compute[189296]: 2025-11-28 18:12:14.960 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:12:15 compute-0 nova_compute[189296]: 2025-11-28 18:12:15.014 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051/disk.eph0 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:12:15 compute-0 nova_compute[189296]: 2025-11-28 18:12:15.020 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:12:15 compute-0 nova_compute[189296]: 2025-11-28 18:12:15.106 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:12:15 compute-0 nova_compute[189296]: 2025-11-28 18:12:15.108 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:12:15 compute-0 nova_compute[189296]: 2025-11-28 18:12:15.178 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:12:15 compute-0 nova_compute[189296]: 2025-11-28 18:12:15.179 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:12:15 compute-0 nova_compute[189296]: 2025-11-28 18:12:15.258 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:12:15 compute-0 nova_compute[189296]: 2025-11-28 18:12:15.260 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:12:15 compute-0 nova_compute[189296]: 2025-11-28 18:12:15.322 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:12:15 compute-0 nova_compute[189296]: 2025-11-28 18:12:15.651 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:12:15 compute-0 nova_compute[189296]: 2025-11-28 18:12:15.653 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4835MB free_disk=72.33578491210938GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:12:15 compute-0 nova_compute[189296]: 2025-11-28 18:12:15.653 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:12:15 compute-0 nova_compute[189296]: 2025-11-28 18:12:15.653 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:12:15 compute-0 nova_compute[189296]: 2025-11-28 18:12:15.740 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 5d10f9fc-89ea-4059-8532-7e0aec0791d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:12:15 compute-0 nova_compute[189296]: 2025-11-28 18:12:15.741 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 738e5649-3e79-434b-9fbe-4aff6d71b051 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:12:15 compute-0 nova_compute[189296]: 2025-11-28 18:12:15.741 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:12:15 compute-0 nova_compute[189296]: 2025-11-28 18:12:15.742 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:12:15 compute-0 nova_compute[189296]: 2025-11-28 18:12:15.812 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:12:15 compute-0 nova_compute[189296]: 2025-11-28 18:12:15.824 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:12:15 compute-0 nova_compute[189296]: 2025-11-28 18:12:15.826 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:12:15 compute-0 nova_compute[189296]: 2025-11-28 18:12:15.826 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.171 189300 DEBUG oslo_concurrency.lockutils [None req-893fa043-e33d-409b-813a-fe0b5f8975a9 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "738e5649-3e79-434b-9fbe-4aff6d71b051" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.172 189300 DEBUG oslo_concurrency.lockutils [None req-893fa043-e33d-409b-813a-fe0b5f8975a9 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "738e5649-3e79-434b-9fbe-4aff6d71b051" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.173 189300 DEBUG oslo_concurrency.lockutils [None req-893fa043-e33d-409b-813a-fe0b5f8975a9 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "738e5649-3e79-434b-9fbe-4aff6d71b051-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.174 189300 DEBUG oslo_concurrency.lockutils [None req-893fa043-e33d-409b-813a-fe0b5f8975a9 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "738e5649-3e79-434b-9fbe-4aff6d71b051-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.175 189300 DEBUG oslo_concurrency.lockutils [None req-893fa043-e33d-409b-813a-fe0b5f8975a9 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "738e5649-3e79-434b-9fbe-4aff6d71b051-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.177 189300 INFO nova.compute.manager [None req-893fa043-e33d-409b-813a-fe0b5f8975a9 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Terminating instance#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.180 189300 DEBUG nova.compute.manager [None req-893fa043-e33d-409b-813a-fe0b5f8975a9 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.198 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:17 compute-0 kernel: tapd9985197-6a (unregistering): left promiscuous mode
Nov 28 18:12:17 compute-0 NetworkManager[56307]: <info>  [1764353537.2314] device (tapd9985197-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 28 18:12:17 compute-0 ovn_controller[97771]: 2025-11-28T18:12:17Z|00065|binding|INFO|Releasing lport d9985197-6aa0-4811-a620-ee1b4aa74e74 from this chassis (sb_readonly=0)
Nov 28 18:12:17 compute-0 ovn_controller[97771]: 2025-11-28T18:12:17Z|00066|binding|INFO|Setting lport d9985197-6aa0-4811-a620-ee1b4aa74e74 down in Southbound
Nov 28 18:12:17 compute-0 ovn_controller[97771]: 2025-11-28T18:12:17Z|00067|binding|INFO|Removing iface tapd9985197-6a ovn-installed in OVS
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.244 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.248 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:17 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:17.253 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:e2:d6 192.168.0.35'], port_security=['fa:16:3e:5c:e2:d6 192.168.0.35'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-po7lv7knpyto-cwp5r5rzhumi-q43femobqz35-port-uyqu37nujs2e', 'neutron:cidrs': '192.168.0.35/24', 'neutron:device_id': '738e5649-3e79-434b-9fbe-4aff6d71b051', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-po7lv7knpyto-cwp5r5rzhumi-q43femobqz35-port-uyqu37nujs2e', 'neutron:project_id': '79ee04b003ca4eb8a045699c7852a8b0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a309e23b-efb6-4377-8050-5a658324ee07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.208', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37710b57-0bdd-4c1a-aa8d-366aa83fbf51, chassis=[], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=d9985197-6aa0-4811-a620-ee1b4aa74e74) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:12:17 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:17.256 106624 INFO neutron.agent.ovn.metadata.agent [-] Port d9985197-6aa0-4811-a620-ee1b4aa74e74 in datapath 5cc11a5f-7338-49fd-ba02-2db7ff676c4f unbound from our chassis#033[00m
Nov 28 18:12:17 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:17.258 106624 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5cc11a5f-7338-49fd-ba02-2db7ff676c4f#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.262 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:17 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:17.285 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[43d834fa-5901-4c12-a918-e060eb6b8284]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:12:17 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Nov 28 18:12:17 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 1min 30.359s CPU time.
Nov 28 18:12:17 compute-0 systemd-machined[155703]: Machine qemu-5-instance-00000005 terminated.
Nov 28 18:12:17 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:17.331 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[3799ceab-5dd0-4d78-b6fd-a7e8ab048f36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:12:17 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:17.337 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[c683b51a-464e-4506-9df1-7dda3c13b93a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:12:17 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:17.377 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[0942e016-6a8c-4eff-b6a0-3efb70ce155b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:12:17 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:17.404 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[809f6600-1512-4501-a692-8409bf3d46f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5cc11a5f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:38:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 19, 'rx_bytes': 532, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 19, 'rx_bytes': 532, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 370971, 'reachable_time': 40340, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245963, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.414 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.420 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:17 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:17.428 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[ee2a143e-3547-43ea-9c85-6da66bb00ea6]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap5cc11a5f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 370983, 'tstamp': 370983}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245966, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5cc11a5f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 370986, 'tstamp': 370986}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245966, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:12:17 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:17.430 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc11a5f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.432 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.438 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:17 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:17.439 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cc11a5f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:12:17 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:17.439 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:12:17 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:17.440 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5cc11a5f-70, col_values=(('external_ids', {'iface-id': '467e3797-177d-4174-b963-0efbd15595b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:12:17 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:17.440 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.501 189300 INFO nova.virt.libvirt.driver [-] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Instance destroyed successfully.#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.501 189300 DEBUG nova.objects.instance [None req-893fa043-e33d-409b-813a-fe0b5f8975a9 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lazy-loading 'resources' on Instance uuid 738e5649-3e79-434b-9fbe-4aff6d71b051 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.513 189300 DEBUG nova.virt.libvirt.vif [None req-893fa043-e33d-409b-813a-fe0b5f8975a9 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-28T18:04:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-7knpyto-cwp5r5rzhumi-q43femobqz35-vnf-twxbbv63dycu',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7knpyto-cwp5r5rzhumi-q43femobqz35-vnf-twxbbv63dycu',id=5,image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-28T18:04:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='ac6a0a76-f006-4c50-a4a8-904a1f128161'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='79ee04b003ca4eb8a045699c7852a8b0',ramdisk_id='',reservation_id='r-al0gs0f7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-28T18:04:19Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wNjczMDAwODcwNjExNTAyODIwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA2NzMwMDA4NzA2MTE1MDI4MjA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDY3MzAwMDg3MDYxMTUwMjgyMD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA2NzMwMDA4NzA2MTE1MDI4MjA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wNjczMDAwODcwNjExNTAyODIwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wNjczMDAwODcwNjExNTAyODIwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Nov 28 18:12:17 compute-0 nova_compute[189296]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDY3M
zAwMDg3MDYxMTUwMjgyMD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA2NzMwMDA4NzA2MTE1MDI4MjA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wNjczMDAwODcwNjExNTAyODIwPT0tLQo=',user_id='6a35450c34a344b1a4e63aae1be2b971',uuid=738e5649-3e79-434b-9fbe-4aff6d71b051,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d9985197-6aa0-4811-a620-ee1b4aa74e74", "address": "fa:16:3e:5c:e2:d6", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9985197-6a", "ovs_interfaceid": "d9985197-6aa0-4811-a620-ee1b4aa74e74", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.513 189300 DEBUG nova.network.os_vif_util [None req-893fa043-e33d-409b-813a-fe0b5f8975a9 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converting VIF {"id": "d9985197-6aa0-4811-a620-ee1b4aa74e74", "address": "fa:16:3e:5c:e2:d6", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9985197-6a", "ovs_interfaceid": "d9985197-6aa0-4811-a620-ee1b4aa74e74", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.514 189300 DEBUG nova.network.os_vif_util [None req-893fa043-e33d-409b-813a-fe0b5f8975a9 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5c:e2:d6,bridge_name='br-int',has_traffic_filtering=True,id=d9985197-6aa0-4811-a620-ee1b4aa74e74,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd9985197-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.514 189300 DEBUG os_vif [None req-893fa043-e33d-409b-813a-fe0b5f8975a9 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5c:e2:d6,bridge_name='br-int',has_traffic_filtering=True,id=d9985197-6aa0-4811-a620-ee1b4aa74e74,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd9985197-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.515 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.516 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9985197-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.517 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.518 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.520 189300 DEBUG nova.compute.manager [req-135bc46f-c521-448b-a631-0e777797e73f req-eb2914f0-469c-4e49-90e6-9a2e866b5759 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Received event network-vif-unplugged-d9985197-6aa0-4811-a620-ee1b4aa74e74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.520 189300 DEBUG oslo_concurrency.lockutils [req-135bc46f-c521-448b-a631-0e777797e73f req-eb2914f0-469c-4e49-90e6-9a2e866b5759 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "738e5649-3e79-434b-9fbe-4aff6d71b051-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.521 189300 DEBUG oslo_concurrency.lockutils [req-135bc46f-c521-448b-a631-0e777797e73f req-eb2914f0-469c-4e49-90e6-9a2e866b5759 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "738e5649-3e79-434b-9fbe-4aff6d71b051-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.521 189300 DEBUG oslo_concurrency.lockutils [req-135bc46f-c521-448b-a631-0e777797e73f req-eb2914f0-469c-4e49-90e6-9a2e866b5759 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "738e5649-3e79-434b-9fbe-4aff6d71b051-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.521 189300 DEBUG nova.compute.manager [req-135bc46f-c521-448b-a631-0e777797e73f req-eb2914f0-469c-4e49-90e6-9a2e866b5759 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] No waiting events found dispatching network-vif-unplugged-d9985197-6aa0-4811-a620-ee1b4aa74e74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.521 189300 DEBUG nova.compute.manager [req-135bc46f-c521-448b-a631-0e777797e73f req-eb2914f0-469c-4e49-90e6-9a2e866b5759 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Received event network-vif-unplugged-d9985197-6aa0-4811-a620-ee1b4aa74e74 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.521 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.525 189300 INFO os_vif [None req-893fa043-e33d-409b-813a-fe0b5f8975a9 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5c:e2:d6,bridge_name='br-int',has_traffic_filtering=True,id=d9985197-6aa0-4811-a620-ee1b4aa74e74,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd9985197-6a')#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.526 189300 INFO nova.virt.libvirt.driver [None req-893fa043-e33d-409b-813a-fe0b5f8975a9 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Deleting instance files /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051_del#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.527 189300 INFO nova.virt.libvirt.driver [None req-893fa043-e33d-409b-813a-fe0b5f8975a9 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Deletion of /var/lib/nova/instances/738e5649-3e79-434b-9fbe-4aff6d71b051_del complete#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.582 189300 INFO nova.compute.manager [None req-893fa043-e33d-409b-813a-fe0b5f8975a9 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Took 0.40 seconds to destroy the instance on the hypervisor.#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.583 189300 DEBUG oslo.service.loopingcall [None req-893fa043-e33d-409b-813a-fe0b5f8975a9 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.583 189300 DEBUG nova.compute.manager [-] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.583 189300 DEBUG nova.network.neutron [-] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 28 18:12:17 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:17.645 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:8b:d3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:a2:f8:d3:3f:9a'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:12:17 compute-0 nova_compute[189296]: 2025-11-28 18:12:17.646 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:17 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:17.646 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 28 18:12:17 compute-0 rsyslogd[236416]: message too long (8192) with configured size 8096, begin of message is: 2025-11-28 18:12:17.513 189300 DEBUG nova.virt.libvirt.vif [None req-893fa043-e3 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 28 18:12:18 compute-0 nova_compute[189296]: 2025-11-28 18:12:18.387 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:18 compute-0 nova_compute[189296]: 2025-11-28 18:12:18.482 189300 DEBUG nova.compute.manager [req-af2e03fe-ca6e-4228-9940-626f2a7df693 req-9d8bbb94-d571-4345-a744-bd18410764b9 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Received event network-changed-d9985197-6aa0-4811-a620-ee1b4aa74e74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:12:18 compute-0 nova_compute[189296]: 2025-11-28 18:12:18.483 189300 DEBUG nova.compute.manager [req-af2e03fe-ca6e-4228-9940-626f2a7df693 req-9d8bbb94-d571-4345-a744-bd18410764b9 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Refreshing instance network info cache due to event network-changed-d9985197-6aa0-4811-a620-ee1b4aa74e74. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 28 18:12:18 compute-0 nova_compute[189296]: 2025-11-28 18:12:18.483 189300 DEBUG oslo_concurrency.lockutils [req-af2e03fe-ca6e-4228-9940-626f2a7df693 req-9d8bbb94-d571-4345-a744-bd18410764b9 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-738e5649-3e79-434b-9fbe-4aff6d71b051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:12:18 compute-0 nova_compute[189296]: 2025-11-28 18:12:18.484 189300 DEBUG oslo_concurrency.lockutils [req-af2e03fe-ca6e-4228-9940-626f2a7df693 req-9d8bbb94-d571-4345-a744-bd18410764b9 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-738e5649-3e79-434b-9fbe-4aff6d71b051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:12:18 compute-0 nova_compute[189296]: 2025-11-28 18:12:18.484 189300 DEBUG nova.network.neutron [req-af2e03fe-ca6e-4228-9940-626f2a7df693 req-9d8bbb94-d571-4345-a744-bd18410764b9 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Refreshing network info cache for port d9985197-6aa0-4811-a620-ee1b4aa74e74 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 18:12:18 compute-0 nova_compute[189296]: 2025-11-28 18:12:18.826 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:12:18 compute-0 nova_compute[189296]: 2025-11-28 18:12:18.845 189300 DEBUG nova.network.neutron [-] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:12:18 compute-0 nova_compute[189296]: 2025-11-28 18:12:18.863 189300 INFO nova.compute.manager [-] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Took 1.28 seconds to deallocate network for instance.#033[00m
Nov 28 18:12:18 compute-0 nova_compute[189296]: 2025-11-28 18:12:18.908 189300 DEBUG oslo_concurrency.lockutils [None req-893fa043-e33d-409b-813a-fe0b5f8975a9 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:12:18 compute-0 nova_compute[189296]: 2025-11-28 18:12:18.909 189300 DEBUG oslo_concurrency.lockutils [None req-893fa043-e33d-409b-813a-fe0b5f8975a9 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:12:18 compute-0 nova_compute[189296]: 2025-11-28 18:12:18.988 189300 DEBUG nova.compute.provider_tree [None req-893fa043-e33d-409b-813a-fe0b5f8975a9 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:12:19 compute-0 nova_compute[189296]: 2025-11-28 18:12:19.009 189300 DEBUG nova.scheduler.client.report [None req-893fa043-e33d-409b-813a-fe0b5f8975a9 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:12:19 compute-0 nova_compute[189296]: 2025-11-28 18:12:19.036 189300 DEBUG oslo_concurrency.lockutils [None req-893fa043-e33d-409b-813a-fe0b5f8975a9 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:12:19 compute-0 nova_compute[189296]: 2025-11-28 18:12:19.085 189300 INFO nova.scheduler.client.report [None req-893fa043-e33d-409b-813a-fe0b5f8975a9 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Deleted allocations for instance 738e5649-3e79-434b-9fbe-4aff6d71b051#033[00m
Nov 28 18:12:19 compute-0 nova_compute[189296]: 2025-11-28 18:12:19.181 189300 DEBUG oslo_concurrency.lockutils [None req-893fa043-e33d-409b-813a-fe0b5f8975a9 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "738e5649-3e79-434b-9fbe-4aff6d71b051" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:12:19 compute-0 podman[245986]: 2025-11-28 18:12:19.286867205 +0000 UTC m=+0.074370687 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:12:19 compute-0 nova_compute[189296]: 2025-11-28 18:12:19.605 189300 DEBUG nova.compute.manager [req-a2121d69-d109-4385-8d2b-37142b0ed196 req-f64b8533-c1e1-45a7-b7ea-038d8c10eef5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Received event network-vif-plugged-d9985197-6aa0-4811-a620-ee1b4aa74e74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:12:19 compute-0 nova_compute[189296]: 2025-11-28 18:12:19.606 189300 DEBUG oslo_concurrency.lockutils [req-a2121d69-d109-4385-8d2b-37142b0ed196 req-f64b8533-c1e1-45a7-b7ea-038d8c10eef5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "738e5649-3e79-434b-9fbe-4aff6d71b051-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:12:19 compute-0 nova_compute[189296]: 2025-11-28 18:12:19.607 189300 DEBUG oslo_concurrency.lockutils [req-a2121d69-d109-4385-8d2b-37142b0ed196 req-f64b8533-c1e1-45a7-b7ea-038d8c10eef5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "738e5649-3e79-434b-9fbe-4aff6d71b051-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:12:19 compute-0 nova_compute[189296]: 2025-11-28 18:12:19.608 189300 DEBUG oslo_concurrency.lockutils [req-a2121d69-d109-4385-8d2b-37142b0ed196 req-f64b8533-c1e1-45a7-b7ea-038d8c10eef5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "738e5649-3e79-434b-9fbe-4aff6d71b051-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:12:19 compute-0 nova_compute[189296]: 2025-11-28 18:12:19.609 189300 DEBUG nova.compute.manager [req-a2121d69-d109-4385-8d2b-37142b0ed196 req-f64b8533-c1e1-45a7-b7ea-038d8c10eef5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] No waiting events found dispatching network-vif-plugged-d9985197-6aa0-4811-a620-ee1b4aa74e74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:12:19 compute-0 nova_compute[189296]: 2025-11-28 18:12:19.610 189300 WARNING nova.compute.manager [req-a2121d69-d109-4385-8d2b-37142b0ed196 req-f64b8533-c1e1-45a7-b7ea-038d8c10eef5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Received unexpected event network-vif-plugged-d9985197-6aa0-4811-a620-ee1b4aa74e74 for instance with vm_state deleted and task_state None.#033[00m
Nov 28 18:12:20 compute-0 nova_compute[189296]: 2025-11-28 18:12:20.354 189300 DEBUG nova.network.neutron [req-af2e03fe-ca6e-4228-9940-626f2a7df693 req-9d8bbb94-d571-4345-a744-bd18410764b9 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Updated VIF entry in instance network info cache for port d9985197-6aa0-4811-a620-ee1b4aa74e74. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 28 18:12:20 compute-0 nova_compute[189296]: 2025-11-28 18:12:20.354 189300 DEBUG nova.network.neutron [req-af2e03fe-ca6e-4228-9940-626f2a7df693 req-9d8bbb94-d571-4345-a744-bd18410764b9 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Updating instance_info_cache with network_info: [{"id": "d9985197-6aa0-4811-a620-ee1b4aa74e74", "address": "fa:16:3e:5c:e2:d6", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.35", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9985197-6a", "ovs_interfaceid": "d9985197-6aa0-4811-a620-ee1b4aa74e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:12:20 compute-0 nova_compute[189296]: 2025-11-28 18:12:20.368 189300 DEBUG oslo_concurrency.lockutils [req-af2e03fe-ca6e-4228-9940-626f2a7df693 req-9d8bbb94-d571-4345-a744-bd18410764b9 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-738e5649-3e79-434b-9fbe-4aff6d71b051" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:12:22 compute-0 nova_compute[189296]: 2025-11-28 18:12:22.518 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:22 compute-0 nova_compute[189296]: 2025-11-28 18:12:22.620 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:12:23 compute-0 nova_compute[189296]: 2025-11-28 18:12:23.390 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:27 compute-0 nova_compute[189296]: 2025-11-28 18:12:27.521 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:27 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:27.648 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d60b742f-7e94-4137-b50a-cfc8eac54167, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:12:28 compute-0 nova_compute[189296]: 2025-11-28 18:12:28.392 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:30 compute-0 podman[203494]: time="2025-11-28T18:12:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:12:30 compute-0 podman[203494]: @ - - [28/Nov/2025:18:12:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Nov 28 18:12:30 compute-0 podman[203494]: @ - - [28/Nov/2025:18:12:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4782 "" "Go-http-client/1.1"
Nov 28 18:12:31 compute-0 podman[246010]: 2025-11-28 18:12:31.0603021 +0000 UTC m=+0.105220247 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=)
Nov 28 18:12:31 compute-0 podman[246011]: 2025-11-28 18:12:31.073618533 +0000 UTC m=+0.114949314 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=f26160204c78771e78cdd2489258319b, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Nov 28 18:12:31 compute-0 podman[246012]: 2025-11-28 18:12:31.081986166 +0000 UTC m=+0.129411634 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:12:31 compute-0 openstack_network_exporter[205632]: ERROR   18:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:12:31 compute-0 openstack_network_exporter[205632]: ERROR   18:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:12:31 compute-0 openstack_network_exporter[205632]: ERROR   18:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:12:31 compute-0 openstack_network_exporter[205632]: ERROR   18:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:12:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:12:31 compute-0 openstack_network_exporter[205632]: ERROR   18:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:12:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:12:32 compute-0 nova_compute[189296]: 2025-11-28 18:12:32.499 189300 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764353537.497768, 738e5649-3e79-434b-9fbe-4aff6d71b051 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:12:32 compute-0 nova_compute[189296]: 2025-11-28 18:12:32.500 189300 INFO nova.compute.manager [-] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] VM Stopped (Lifecycle Event)#033[00m
Nov 28 18:12:32 compute-0 nova_compute[189296]: 2025-11-28 18:12:32.525 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:32 compute-0 nova_compute[189296]: 2025-11-28 18:12:32.532 189300 DEBUG nova.compute.manager [None req-0bb7172e-c785-4c67-a565-1797633ff031 - - - - - -] [instance: 738e5649-3e79-434b-9fbe-4aff6d71b051] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:12:33 compute-0 nova_compute[189296]: 2025-11-28 18:12:33.396 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.591 189300 DEBUG oslo_concurrency.lockutils [None req-f0ee0c1d-9ce5-4198-bec1-14ace062a5a6 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.592 189300 DEBUG oslo_concurrency.lockutils [None req-f0ee0c1d-9ce5-4198-bec1-14ace062a5a6 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.592 189300 DEBUG oslo_concurrency.lockutils [None req-f0ee0c1d-9ce5-4198-bec1-14ace062a5a6 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.593 189300 DEBUG oslo_concurrency.lockutils [None req-f0ee0c1d-9ce5-4198-bec1-14ace062a5a6 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.593 189300 DEBUG oslo_concurrency.lockutils [None req-f0ee0c1d-9ce5-4198-bec1-14ace062a5a6 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.594 189300 INFO nova.compute.manager [None req-f0ee0c1d-9ce5-4198-bec1-14ace062a5a6 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Terminating instance#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.595 189300 DEBUG nova.compute.manager [None req-f0ee0c1d-9ce5-4198-bec1-14ace062a5a6 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 28 18:12:35 compute-0 kernel: tap0e0a227a-62 (unregistering): left promiscuous mode
Nov 28 18:12:35 compute-0 NetworkManager[56307]: <info>  [1764353555.6347] device (tap0e0a227a-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.649 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:35 compute-0 ovn_controller[97771]: 2025-11-28T18:12:35Z|00068|binding|INFO|Releasing lport 0e0a227a-6212-4496-8954-fe210b763d0b from this chassis (sb_readonly=0)
Nov 28 18:12:35 compute-0 ovn_controller[97771]: 2025-11-28T18:12:35Z|00069|binding|INFO|Setting lport 0e0a227a-6212-4496-8954-fe210b763d0b down in Southbound
Nov 28 18:12:35 compute-0 ovn_controller[97771]: 2025-11-28T18:12:35Z|00070|binding|INFO|Removing iface tap0e0a227a-62 ovn-installed in OVS
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.652 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:35 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:35.660 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:42:00 192.168.0.67'], port_security=['fa:16:3e:28:42:00 192.168.0.67'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.67/24', 'neutron:device_id': '5d10f9fc-89ea-4059-8532-7e0aec0791d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '79ee04b003ca4eb8a045699c7852a8b0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a309e23b-efb6-4377-8050-5a658324ee07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.235'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37710b57-0bdd-4c1a-aa8d-366aa83fbf51, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=0e0a227a-6212-4496-8954-fe210b763d0b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:12:35 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:35.661 106624 INFO neutron.agent.ovn.metadata.agent [-] Port 0e0a227a-6212-4496-8954-fe210b763d0b in datapath 5cc11a5f-7338-49fd-ba02-2db7ff676c4f unbound from our chassis#033[00m
Nov 28 18:12:35 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:35.661 106624 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5cc11a5f-7338-49fd-ba02-2db7ff676c4f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 28 18:12:35 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:35.662 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[d184ad02-c812-4b34-8b82-456581e45998]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:12:35 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:35.663 106624 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f namespace which is not needed anymore#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.670 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:35 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Nov 28 18:12:35 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 2min 27.358s CPU time.
Nov 28 18:12:35 compute-0 systemd-machined[155703]: Machine qemu-1-instance-00000001 terminated.
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.819 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:35 compute-0 neutron-haproxy-ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f[239001]: [NOTICE]   (239006) : haproxy version is 2.8.14-c23fe91
Nov 28 18:12:35 compute-0 neutron-haproxy-ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f[239001]: [NOTICE]   (239006) : path to executable is /usr/sbin/haproxy
Nov 28 18:12:35 compute-0 neutron-haproxy-ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f[239001]: [WARNING]  (239006) : Exiting Master process...
Nov 28 18:12:35 compute-0 neutron-haproxy-ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f[239001]: [ALERT]    (239006) : Current worker (239008) exited with code 143 (Terminated)
Nov 28 18:12:35 compute-0 neutron-haproxy-ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f[239001]: [WARNING]  (239006) : All workers exited. Exiting... (0)
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.825 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:35 compute-0 systemd[1]: libpod-7ef4d31e8a49646b5a8298d104069287aa28ac253e071a5106da21f1fdf30eeb.scope: Deactivated successfully.
Nov 28 18:12:35 compute-0 podman[246086]: 2025-11-28 18:12:35.834549869 +0000 UTC m=+0.059070737 container died 7ef4d31e8a49646b5a8298d104069287aa28ac253e071a5106da21f1fdf30eeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.850 189300 DEBUG nova.compute.manager [req-d187d0a7-2c45-4b7d-a9b4-9d9fedfa8a98 req-b248618c-3e33-4dba-ad9d-897404862531 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Received event network-vif-unplugged-0e0a227a-6212-4496-8954-fe210b763d0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.851 189300 DEBUG oslo_concurrency.lockutils [req-d187d0a7-2c45-4b7d-a9b4-9d9fedfa8a98 req-b248618c-3e33-4dba-ad9d-897404862531 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.851 189300 DEBUG oslo_concurrency.lockutils [req-d187d0a7-2c45-4b7d-a9b4-9d9fedfa8a98 req-b248618c-3e33-4dba-ad9d-897404862531 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.852 189300 DEBUG oslo_concurrency.lockutils [req-d187d0a7-2c45-4b7d-a9b4-9d9fedfa8a98 req-b248618c-3e33-4dba-ad9d-897404862531 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.852 189300 DEBUG nova.compute.manager [req-d187d0a7-2c45-4b7d-a9b4-9d9fedfa8a98 req-b248618c-3e33-4dba-ad9d-897404862531 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] No waiting events found dispatching network-vif-unplugged-0e0a227a-6212-4496-8954-fe210b763d0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.852 189300 DEBUG nova.compute.manager [req-d187d0a7-2c45-4b7d-a9b4-9d9fedfa8a98 req-b248618c-3e33-4dba-ad9d-897404862531 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Received event network-vif-unplugged-0e0a227a-6212-4496-8954-fe210b763d0b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 28 18:12:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7ef4d31e8a49646b5a8298d104069287aa28ac253e071a5106da21f1fdf30eeb-userdata-shm.mount: Deactivated successfully.
Nov 28 18:12:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b9de74ec40611e6e4d55216482e0f148c142adf9c9083d7b2b1f5ff871de056-merged.mount: Deactivated successfully.
Nov 28 18:12:35 compute-0 podman[246086]: 2025-11-28 18:12:35.887408023 +0000 UTC m=+0.111928891 container cleanup 7ef4d31e8a49646b5a8298d104069287aa28ac253e071a5106da21f1fdf30eeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.893 189300 INFO nova.virt.libvirt.driver [-] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Instance destroyed successfully.#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.894 189300 DEBUG nova.objects.instance [None req-f0ee0c1d-9ce5-4198-bec1-14ace062a5a6 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lazy-loading 'resources' on Instance uuid 5d10f9fc-89ea-4059-8532-7e0aec0791d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.910 189300 DEBUG nova.virt.libvirt.vif [None req-f0ee0c1d-9ce5-4198-bec1-14ace062a5a6 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-28T17:55:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-28T17:56:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='79ee04b003ca4eb8a045699c7852a8b0',ramdisk_id='',reservation_id='r-a3s2pmkm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='f54c2688-82d2-4cd3-8c3b-96e774162948',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.op
enstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-28T17:56:02Z,user_data=None,user_id='6a35450c34a344b1a4e63aae1be2b971',uuid=5d10f9fc-89ea-4059-8532-7e0aec0791d6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0e0a227a-6212-4496-8954-fe210b763d0b", "address": "fa:16:3e:28:42:00", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e0a227a-62", "ovs_interfaceid": "0e0a227a-6212-4496-8954-fe210b763d0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.910 189300 DEBUG nova.network.os_vif_util [None req-f0ee0c1d-9ce5-4198-bec1-14ace062a5a6 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converting VIF {"id": "0e0a227a-6212-4496-8954-fe210b763d0b", "address": "fa:16:3e:28:42:00", "network": {"id": "5cc11a5f-7338-49fd-ba02-2db7ff676c4f", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79ee04b003ca4eb8a045699c7852a8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e0a227a-62", "ovs_interfaceid": "0e0a227a-6212-4496-8954-fe210b763d0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.911 189300 DEBUG nova.network.os_vif_util [None req-f0ee0c1d-9ce5-4198-bec1-14ace062a5a6 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:28:42:00,bridge_name='br-int',has_traffic_filtering=True,id=0e0a227a-6212-4496-8954-fe210b763d0b,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e0a227a-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:12:35 compute-0 systemd[1]: libpod-conmon-7ef4d31e8a49646b5a8298d104069287aa28ac253e071a5106da21f1fdf30eeb.scope: Deactivated successfully.
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.912 189300 DEBUG os_vif [None req-f0ee0c1d-9ce5-4198-bec1-14ace062a5a6 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:42:00,bridge_name='br-int',has_traffic_filtering=True,id=0e0a227a-6212-4496-8954-fe210b763d0b,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e0a227a-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.914 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.915 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e0a227a-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.917 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.918 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.920 189300 INFO os_vif [None req-f0ee0c1d-9ce5-4198-bec1-14ace062a5a6 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:42:00,bridge_name='br-int',has_traffic_filtering=True,id=0e0a227a-6212-4496-8954-fe210b763d0b,network=Network(5cc11a5f-7338-49fd-ba02-2db7ff676c4f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e0a227a-62')#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.921 189300 INFO nova.virt.libvirt.driver [None req-f0ee0c1d-9ce5-4198-bec1-14ace062a5a6 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Deleting instance files /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6_del#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.922 189300 INFO nova.virt.libvirt.driver [None req-f0ee0c1d-9ce5-4198-bec1-14ace062a5a6 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Deletion of /var/lib/nova/instances/5d10f9fc-89ea-4059-8532-7e0aec0791d6_del complete#033[00m
Nov 28 18:12:35 compute-0 podman[246135]: 2025-11-28 18:12:35.962253501 +0000 UTC m=+0.045940197 container remove 7ef4d31e8a49646b5a8298d104069287aa28ac253e071a5106da21f1fdf30eeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 28 18:12:35 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:35.968 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[5821734d-2794-42a5-8ece-ec8f0434afe7]: (4, ('Fri Nov 28 06:12:35 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f (7ef4d31e8a49646b5a8298d104069287aa28ac253e071a5106da21f1fdf30eeb)\n7ef4d31e8a49646b5a8298d104069287aa28ac253e071a5106da21f1fdf30eeb\nFri Nov 28 06:12:35 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f (7ef4d31e8a49646b5a8298d104069287aa28ac253e071a5106da21f1fdf30eeb)\n7ef4d31e8a49646b5a8298d104069287aa28ac253e071a5106da21f1fdf30eeb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:12:35 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:35.971 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[e38fc0a8-856f-41ac-9fa3-9003cf42df37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.972 189300 INFO nova.compute.manager [None req-f0ee0c1d-9ce5-4198-bec1-14ace062a5a6 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Took 0.38 seconds to destroy the instance on the hypervisor.#033[00m
Nov 28 18:12:35 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:35.973 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cc11a5f-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.973 189300 DEBUG oslo.service.loopingcall [None req-f0ee0c1d-9ce5-4198-bec1-14ace062a5a6 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.974 189300 DEBUG nova.compute.manager [-] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.974 189300 DEBUG nova.network.neutron [-] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 28 18:12:35 compute-0 kernel: tap5cc11a5f-70: left promiscuous mode
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.981 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:35 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:35.981 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[ad18dcf6-de4f-43cd-bf35-82b5ba25f6ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:12:35 compute-0 nova_compute[189296]: 2025-11-28 18:12:35.997 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:36 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:36.002 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[2299a902-897c-4087-81b5-d8f22b5b5f81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:12:36 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:36.003 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[f0508a50-c90f-4e2e-b019-6e9a7ac4ddda]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:12:36 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:36.017 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[0b37a828-c65a-4f4a-a757-e887d0e42f43]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 370960, 'reachable_time': 21007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246149, 'error': None, 'target': 'ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:12:36 compute-0 systemd[1]: run-netns-ovnmeta\x2d5cc11a5f\x2d7338\x2d49fd\x2dba02\x2d2db7ff676c4f.mount: Deactivated successfully.
Nov 28 18:12:36 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:36.027 106734 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5cc11a5f-7338-49fd-ba02-2db7ff676c4f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 28 18:12:36 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:36.028 106734 DEBUG oslo.privsep.daemon [-] privsep: reply[83ea54dd-21ff-46de-a031-a708dee86256]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:12:36 compute-0 nova_compute[189296]: 2025-11-28 18:12:36.723 189300 DEBUG nova.network.neutron [-] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:12:36 compute-0 nova_compute[189296]: 2025-11-28 18:12:36.747 189300 INFO nova.compute.manager [-] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Took 0.77 seconds to deallocate network for instance.#033[00m
Nov 28 18:12:36 compute-0 nova_compute[189296]: 2025-11-28 18:12:36.801 189300 DEBUG oslo_concurrency.lockutils [None req-f0ee0c1d-9ce5-4198-bec1-14ace062a5a6 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:12:36 compute-0 nova_compute[189296]: 2025-11-28 18:12:36.803 189300 DEBUG oslo_concurrency.lockutils [None req-f0ee0c1d-9ce5-4198-bec1-14ace062a5a6 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:12:36 compute-0 nova_compute[189296]: 2025-11-28 18:12:36.862 189300 DEBUG nova.compute.provider_tree [None req-f0ee0c1d-9ce5-4198-bec1-14ace062a5a6 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:12:36 compute-0 nova_compute[189296]: 2025-11-28 18:12:36.878 189300 DEBUG nova.scheduler.client.report [None req-f0ee0c1d-9ce5-4198-bec1-14ace062a5a6 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:12:36 compute-0 nova_compute[189296]: 2025-11-28 18:12:36.902 189300 DEBUG oslo_concurrency.lockutils [None req-f0ee0c1d-9ce5-4198-bec1-14ace062a5a6 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:12:36 compute-0 nova_compute[189296]: 2025-11-28 18:12:36.935 189300 INFO nova.scheduler.client.report [None req-f0ee0c1d-9ce5-4198-bec1-14ace062a5a6 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Deleted allocations for instance 5d10f9fc-89ea-4059-8532-7e0aec0791d6#033[00m
Nov 28 18:12:37 compute-0 nova_compute[189296]: 2025-11-28 18:12:37.005 189300 DEBUG oslo_concurrency.lockutils [None req-f0ee0c1d-9ce5-4198-bec1-14ace062a5a6 6a35450c34a344b1a4e63aae1be2b971 79ee04b003ca4eb8a045699c7852a8b0 - - default default] Lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.413s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:12:37 compute-0 podman[246152]: 2025-11-28 18:12:37.011655914 +0000 UTC m=+0.075628138 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 28 18:12:37 compute-0 podman[246151]: 2025-11-28 18:12:37.02752375 +0000 UTC m=+0.092710744 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 28 18:12:37 compute-0 nova_compute[189296]: 2025-11-28 18:12:37.947 189300 DEBUG nova.compute.manager [req-9d16f255-3bac-4486-96c3-fd0e768fdd59 req-4528f176-0f22-48c5-a10c-f30f414ba2cf 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Received event network-vif-plugged-0e0a227a-6212-4496-8954-fe210b763d0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:12:37 compute-0 nova_compute[189296]: 2025-11-28 18:12:37.948 189300 DEBUG oslo_concurrency.lockutils [req-9d16f255-3bac-4486-96c3-fd0e768fdd59 req-4528f176-0f22-48c5-a10c-f30f414ba2cf 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:12:37 compute-0 nova_compute[189296]: 2025-11-28 18:12:37.949 189300 DEBUG oslo_concurrency.lockutils [req-9d16f255-3bac-4486-96c3-fd0e768fdd59 req-4528f176-0f22-48c5-a10c-f30f414ba2cf 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:12:37 compute-0 nova_compute[189296]: 2025-11-28 18:12:37.949 189300 DEBUG oslo_concurrency.lockutils [req-9d16f255-3bac-4486-96c3-fd0e768fdd59 req-4528f176-0f22-48c5-a10c-f30f414ba2cf 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "5d10f9fc-89ea-4059-8532-7e0aec0791d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:12:37 compute-0 nova_compute[189296]: 2025-11-28 18:12:37.949 189300 DEBUG nova.compute.manager [req-9d16f255-3bac-4486-96c3-fd0e768fdd59 req-4528f176-0f22-48c5-a10c-f30f414ba2cf 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] No waiting events found dispatching network-vif-plugged-0e0a227a-6212-4496-8954-fe210b763d0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:12:37 compute-0 nova_compute[189296]: 2025-11-28 18:12:37.950 189300 WARNING nova.compute.manager [req-9d16f255-3bac-4486-96c3-fd0e768fdd59 req-4528f176-0f22-48c5-a10c-f30f414ba2cf 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Received unexpected event network-vif-plugged-0e0a227a-6212-4496-8954-fe210b763d0b for instance with vm_state deleted and task_state None.#033[00m
Nov 28 18:12:37 compute-0 nova_compute[189296]: 2025-11-28 18:12:37.950 189300 DEBUG nova.compute.manager [req-9d16f255-3bac-4486-96c3-fd0e768fdd59 req-4528f176-0f22-48c5-a10c-f30f414ba2cf 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Received event network-vif-deleted-0e0a227a-6212-4496-8954-fe210b763d0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:12:38 compute-0 nova_compute[189296]: 2025-11-28 18:12:38.401 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:39 compute-0 podman[246190]: 2025-11-28 18:12:39.034509287 +0000 UTC m=+0.091748010 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, managed_by=edpm_ansible, name=ubi9, container_name=kepler, 
vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.4, maintainer=Red Hat, Inc.)
Nov 28 18:12:39 compute-0 podman[246189]: 2025-11-28 18:12:39.061926774 +0000 UTC m=+0.117531297 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 28 18:12:40 compute-0 nova_compute[189296]: 2025-11-28 18:12:40.920 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:42 compute-0 podman[246229]: 2025-11-28 18:12:42.100779578 +0000 UTC m=+0.151924563 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 28 18:12:43 compute-0 nova_compute[189296]: 2025-11-28 18:12:43.402 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:45 compute-0 nova_compute[189296]: 2025-11-28 18:12:45.925 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:48 compute-0 nova_compute[189296]: 2025-11-28 18:12:48.405 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:50 compute-0 podman[246256]: 2025-11-28 18:12:50.066692378 +0000 UTC m=+0.114329608 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:12:50 compute-0 nova_compute[189296]: 2025-11-28 18:12:50.891 189300 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764353555.88973, 5d10f9fc-89ea-4059-8532-7e0aec0791d6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:12:50 compute-0 nova_compute[189296]: 2025-11-28 18:12:50.891 189300 INFO nova.compute.manager [-] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] VM Stopped (Lifecycle Event)#033[00m
Nov 28 18:12:50 compute-0 nova_compute[189296]: 2025-11-28 18:12:50.917 189300 DEBUG nova.compute.manager [None req-92f63ccc-2a51-47a3-89f6-1f3b6cad30b4 - - - - - -] [instance: 5d10f9fc-89ea-4059-8532-7e0aec0791d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:12:50 compute-0 nova_compute[189296]: 2025-11-28 18:12:50.929 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.981 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.982 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.982 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc143395760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.983 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.983 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.983 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.983 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.983 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.983 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.983 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.983 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.985 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.985 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc1433970b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.985 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.985 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc1433971d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.985 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.986 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc143397c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.986 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.986 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc143397620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.986 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.986 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc143397260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.986 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.986 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc143397290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.986 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.986 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc143408290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.986 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.987 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc1433972f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.987 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.987 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc144640f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.987 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.987 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc1433976b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.987 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.987 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc143397fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.987 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.987 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc14457db80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.987 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.988 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc143397950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.988 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.985 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.error': [], 'cpu': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:12:51 compute-0 rsyslogd[236416]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.988 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc143397380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.988 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.988 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc143397bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.988 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.error': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.989 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc1433973e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.989 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.989 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc143397c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.989 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.989 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc143397ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.989 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.error': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.989 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.990 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc1460ad370>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.990 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.990 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc143397d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.990 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.990 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc143397e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.990 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.990 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc143397650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.991 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.991 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc143397e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.991 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.990 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.error': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.991 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.error': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.991 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc143397f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.991 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.992 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc143397230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.992 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.992 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.992 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:12:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.992 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:12:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.992 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:12:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.992 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:12:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.992 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:12:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.992 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:12:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:12:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:12:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:12:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:12:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:12:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:12:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:12:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:12:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:12:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:12:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:12:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:12:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:12:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.994 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:12:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.994 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:12:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.994 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:12:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.994 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:12:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.994 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:12:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:12:51.994 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:12:52 compute-0 rsyslogd[236416]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 28 18:12:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:52.622 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:12:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:52.623 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:12:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:12:52.623 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:12:53 compute-0 nova_compute[189296]: 2025-11-28 18:12:53.407 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:55 compute-0 nova_compute[189296]: 2025-11-28 18:12:55.933 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:58 compute-0 nova_compute[189296]: 2025-11-28 18:12:58.410 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:12:59 compute-0 nova_compute[189296]: 2025-11-28 18:12:59.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:12:59 compute-0 podman[203494]: time="2025-11-28T18:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:12:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 28 18:12:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4310 "" "Go-http-client/1.1"
Nov 28 18:13:00 compute-0 nova_compute[189296]: 2025-11-28 18:13:00.939 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:13:01 compute-0 openstack_network_exporter[205632]: ERROR   18:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:13:01 compute-0 openstack_network_exporter[205632]: ERROR   18:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:13:01 compute-0 openstack_network_exporter[205632]: ERROR   18:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:13:01 compute-0 openstack_network_exporter[205632]: ERROR   18:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:13:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:13:01 compute-0 openstack_network_exporter[205632]: ERROR   18:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:13:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:13:02 compute-0 podman[246281]: 2025-11-28 18:13:02.033188307 +0000 UTC m=+0.083939040 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9-minimal, version=9.6, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, release=1755695350, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 28 18:13:02 compute-0 podman[246282]: 2025-11-28 18:13:02.05142107 +0000 UTC m=+0.087322742 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=f26160204c78771e78cdd2489258319b, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 28 18:13:02 compute-0 podman[246283]: 2025-11-28 18:13:02.05512778 +0000 UTC m=+0.082957217 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 28 18:13:03 compute-0 nova_compute[189296]: 2025-11-28 18:13:03.412 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:13:05 compute-0 nova_compute[189296]: 2025-11-28 18:13:05.647 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:13:05 compute-0 nova_compute[189296]: 2025-11-28 18:13:05.648 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 28 18:13:05 compute-0 nova_compute[189296]: 2025-11-28 18:13:05.943 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:13:06 compute-0 ovn_controller[97771]: 2025-11-28T18:13:06Z|00071|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Nov 28 18:13:08 compute-0 podman[246338]: 2025-11-28 18:13:08.039426512 +0000 UTC m=+0.087342753 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:13:08 compute-0 podman[246337]: 2025-11-28 18:13:08.039519495 +0000 UTC m=+0.084681599 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 28 18:13:08 compute-0 nova_compute[189296]: 2025-11-28 18:13:08.414 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:13:08 compute-0 nova_compute[189296]: 2025-11-28 18:13:08.635 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:13:10 compute-0 podman[246376]: 2025-11-28 18:13:10.043494038 +0000 UTC m=+0.102589223 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:13:10 compute-0 podman[246377]: 2025-11-28 18:13:10.043809686 +0000 UTC m=+0.100287617 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, config_id=edpm, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., version=9.4, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler)
Nov 28 18:13:10 compute-0 nova_compute[189296]: 2025-11-28 18:13:10.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:13:10 compute-0 nova_compute[189296]: 2025-11-28 18:13:10.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:13:10 compute-0 nova_compute[189296]: 2025-11-28 18:13:10.650 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 28 18:13:10 compute-0 nova_compute[189296]: 2025-11-28 18:13:10.948 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:13:12 compute-0 nova_compute[189296]: 2025-11-28 18:13:12.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:13:12 compute-0 nova_compute[189296]: 2025-11-28 18:13:12.624 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 28 18:13:12 compute-0 nova_compute[189296]: 2025-11-28 18:13:12.643 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 28 18:13:13 compute-0 podman[246415]: 2025-11-28 18:13:13.086641136 +0000 UTC m=+0.149637485 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 28 18:13:13 compute-0 nova_compute[189296]: 2025-11-28 18:13:13.417 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:13:13 compute-0 nova_compute[189296]: 2025-11-28 18:13:13.644 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:13:13 compute-0 nova_compute[189296]: 2025-11-28 18:13:13.644 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:13:13 compute-0 nova_compute[189296]: 2025-11-28 18:13:13.645 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:13:13 compute-0 nova_compute[189296]: 2025-11-28 18:13:13.645 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:13:13 compute-0 nova_compute[189296]: 2025-11-28 18:13:13.645 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:13:14 compute-0 nova_compute[189296]: 2025-11-28 18:13:14.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:13:15 compute-0 nova_compute[189296]: 2025-11-28 18:13:15.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:13:15 compute-0 nova_compute[189296]: 2025-11-28 18:13:15.663 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:13:15 compute-0 nova_compute[189296]: 2025-11-28 18:13:15.663 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:13:15 compute-0 nova_compute[189296]: 2025-11-28 18:13:15.663 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:13:15 compute-0 nova_compute[189296]: 2025-11-28 18:13:15.663 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:13:15 compute-0 nova_compute[189296]: 2025-11-28 18:13:15.952 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:13:16 compute-0 nova_compute[189296]: 2025-11-28 18:13:16.090 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:13:16 compute-0 nova_compute[189296]: 2025-11-28 18:13:16.091 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5379MB free_disk=72.38045120239258GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:13:16 compute-0 nova_compute[189296]: 2025-11-28 18:13:16.092 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:13:16 compute-0 nova_compute[189296]: 2025-11-28 18:13:16.092 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:13:16 compute-0 nova_compute[189296]: 2025-11-28 18:13:16.272 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:13:16 compute-0 nova_compute[189296]: 2025-11-28 18:13:16.273 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:13:16 compute-0 nova_compute[189296]: 2025-11-28 18:13:16.354 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:13:16 compute-0 nova_compute[189296]: 2025-11-28 18:13:16.368 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:13:16 compute-0 nova_compute[189296]: 2025-11-28 18:13:16.387 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:13:16 compute-0 nova_compute[189296]: 2025-11-28 18:13:16.388 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:13:18 compute-0 nova_compute[189296]: 2025-11-28 18:13:18.420 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:13:19 compute-0 nova_compute[189296]: 2025-11-28 18:13:19.390 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:13:20 compute-0 nova_compute[189296]: 2025-11-28 18:13:20.956 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:13:21 compute-0 podman[246443]: 2025-11-28 18:13:21.062940009 +0000 UTC m=+0.115192139 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:13:23 compute-0 nova_compute[189296]: 2025-11-28 18:13:23.422 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:13:25 compute-0 nova_compute[189296]: 2025-11-28 18:13:25.961 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:13:28 compute-0 nova_compute[189296]: 2025-11-28 18:13:28.424 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:13:30 compute-0 podman[203494]: time="2025-11-28T18:13:30Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:13:30 compute-0 podman[203494]: @ - - [28/Nov/2025:18:13:30 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 28 18:13:30 compute-0 podman[203494]: @ - - [28/Nov/2025:18:13:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4312 "" "Go-http-client/1.1"
Nov 28 18:13:30 compute-0 nova_compute[189296]: 2025-11-28 18:13:30.964 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:13:31 compute-0 openstack_network_exporter[205632]: ERROR   18:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:13:31 compute-0 openstack_network_exporter[205632]: ERROR   18:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:13:31 compute-0 openstack_network_exporter[205632]: ERROR   18:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:13:31 compute-0 openstack_network_exporter[205632]: ERROR   18:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:13:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:13:31 compute-0 openstack_network_exporter[205632]: ERROR   18:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:13:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:13:33 compute-0 podman[246467]: 2025-11-28 18:13:33.032319289 +0000 UTC m=+0.094391994 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., name=ubi9-minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350, architecture=x86_64, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.buildah.version=1.33.7)
Nov 28 18:13:33 compute-0 podman[246469]: 2025-11-28 18:13:33.040136319 +0000 UTC m=+0.097330566 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 28 18:13:33 compute-0 podman[246468]: 2025-11-28 18:13:33.063519436 +0000 UTC m=+0.119949894 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_id=edpm)
Nov 28 18:13:33 compute-0 nova_compute[189296]: 2025-11-28 18:13:33.427 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:13:35 compute-0 nova_compute[189296]: 2025-11-28 18:13:35.968 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:13:38 compute-0 nova_compute[189296]: 2025-11-28 18:13:38.429 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:13:39 compute-0 podman[246526]: 2025-11-28 18:13:39.033461238 +0000 UTC m=+0.083740816 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 28 18:13:39 compute-0 podman[246527]: 2025-11-28 18:13:39.043636375 +0000 UTC m=+0.082420444 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 28 18:13:40 compute-0 nova_compute[189296]: 2025-11-28 18:13:40.973 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:13:41 compute-0 podman[246560]: 2025-11-28 18:13:41.0583725 +0000 UTC m=+0.105062373 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:13:41 compute-0 podman[246561]: 2025-11-28 18:13:41.106823178 +0000 UTC m=+0.146456680 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.component=ubi9-container, release=1214.1726694543, build-date=2024-09-18T21:23:30, config_id=edpm, container_name=kepler, release-0.7.12=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9, io.openshift.tags=base rhel9, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Nov 28 18:13:43 compute-0 nova_compute[189296]: 2025-11-28 18:13:43.435 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:13:44 compute-0 podman[246602]: 2025-11-28 18:13:44.045578141 +0000 UTC m=+0.107347479 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 28 18:13:45 compute-0 nova_compute[189296]: 2025-11-28 18:13:45.979 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:13:48 compute-0 nova_compute[189296]: 2025-11-28 18:13:48.435 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:13:50 compute-0 nova_compute[189296]: 2025-11-28 18:13:50.983 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:13:52 compute-0 podman[246629]: 2025-11-28 18:13:52.032492972 +0000 UTC m=+0.089607638 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 28 18:13:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:13:52.623 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:13:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:13:52.623 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:13:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:13:52.623 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:13:53 compute-0 nova_compute[189296]: 2025-11-28 18:13:53.438 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:13:55 compute-0 nova_compute[189296]: 2025-11-28 18:13:55.988 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:13:58 compute-0 nova_compute[189296]: 2025-11-28 18:13:58.439 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:13:59 compute-0 podman[203494]: time="2025-11-28T18:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:13:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 28 18:13:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4318 "" "Go-http-client/1.1"
Nov 28 18:14:00 compute-0 nova_compute[189296]: 2025-11-28 18:14:00.992 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:14:01 compute-0 openstack_network_exporter[205632]: ERROR   18:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:14:01 compute-0 openstack_network_exporter[205632]: ERROR   18:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:14:01 compute-0 openstack_network_exporter[205632]: ERROR   18:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:14:01 compute-0 openstack_network_exporter[205632]: ERROR   18:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:14:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:14:01 compute-0 openstack_network_exporter[205632]: ERROR   18:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:14:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:14:03 compute-0 nova_compute[189296]: 2025-11-28 18:14:03.441 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:14:04 compute-0 podman[246654]: 2025-11-28 18:14:04.031586323 +0000 UTC m=+0.094437905 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container)
Nov 28 18:14:04 compute-0 podman[246655]: 2025-11-28 18:14:04.044859716 +0000 UTC m=+0.086762349 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=f26160204c78771e78cdd2489258319b, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:14:04 compute-0 podman[246661]: 2025-11-28 18:14:04.081630129 +0000 UTC m=+0.117409574 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd)
Nov 28 18:14:05 compute-0 nova_compute[189296]: 2025-11-28 18:14:05.998 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:14:08 compute-0 nova_compute[189296]: 2025-11-28 18:14:08.443 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:14:08 compute-0 nova_compute[189296]: 2025-11-28 18:14:08.620 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:14:10 compute-0 podman[246709]: 2025-11-28 18:14:10.033468051 +0000 UTC m=+0.081498652 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:14:10 compute-0 podman[246710]: 2025-11-28 18:14:10.080076503 +0000 UTC m=+0.120596361 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:14:11 compute-0 nova_compute[189296]: 2025-11-28 18:14:11.002 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:14:11 compute-0 nova_compute[189296]: 2025-11-28 18:14:11.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:14:11 compute-0 nova_compute[189296]: 2025-11-28 18:14:11.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:14:11 compute-0 nova_compute[189296]: 2025-11-28 18:14:11.627 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 18:14:11 compute-0 nova_compute[189296]: 2025-11-28 18:14:11.650 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 28 18:14:12 compute-0 podman[246747]: 2025-11-28 18:14:11.999695197 +0000 UTC m=+0.059644600 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 28 18:14:12 compute-0 podman[246748]: 2025-11-28 18:14:12.016419124 +0000 UTC m=+0.075931706 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, name=ubi9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.openshift.tags=base rhel9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': 
['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, architecture=x86_64)
Nov 28 18:14:13 compute-0 nova_compute[189296]: 2025-11-28 18:14:13.446 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:14:13 compute-0 nova_compute[189296]: 2025-11-28 18:14:13.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:14:13 compute-0 nova_compute[189296]: 2025-11-28 18:14:13.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:14:13 compute-0 nova_compute[189296]: 2025-11-28 18:14:13.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:14:14 compute-0 podman[246788]: 2025-11-28 18:14:14.784829588 +0000 UTC m=+0.108041335 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 28 18:14:15 compute-0 nova_compute[189296]: 2025-11-28 18:14:15.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:14:15 compute-0 nova_compute[189296]: 2025-11-28 18:14:15.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:14:16 compute-0 nova_compute[189296]: 2025-11-28 18:14:16.005 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:14:16 compute-0 nova_compute[189296]: 2025-11-28 18:14:16.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:14:16 compute-0 nova_compute[189296]: 2025-11-28 18:14:16.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:14:16 compute-0 nova_compute[189296]: 2025-11-28 18:14:16.667 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:14:16 compute-0 nova_compute[189296]: 2025-11-28 18:14:16.668 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:14:16 compute-0 nova_compute[189296]: 2025-11-28 18:14:16.669 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:14:16 compute-0 nova_compute[189296]: 2025-11-28 18:14:16.669 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:14:16 compute-0 nova_compute[189296]: 2025-11-28 18:14:16.998 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:14:16 compute-0 nova_compute[189296]: 2025-11-28 18:14:16.999 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5379MB free_disk=72.38032913208008GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:14:16 compute-0 nova_compute[189296]: 2025-11-28 18:14:16.999 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:14:17 compute-0 nova_compute[189296]: 2025-11-28 18:14:16.999 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:14:17 compute-0 nova_compute[189296]: 2025-11-28 18:14:17.054 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:14:17 compute-0 nova_compute[189296]: 2025-11-28 18:14:17.054 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:14:17 compute-0 nova_compute[189296]: 2025-11-28 18:14:17.131 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:14:17 compute-0 nova_compute[189296]: 2025-11-28 18:14:17.147 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:14:17 compute-0 nova_compute[189296]: 2025-11-28 18:14:17.151 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:14:17 compute-0 nova_compute[189296]: 2025-11-28 18:14:17.151 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:14:18 compute-0 nova_compute[189296]: 2025-11-28 18:14:18.449 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:14:21 compute-0 nova_compute[189296]: 2025-11-28 18:14:21.009 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:14:21 compute-0 nova_compute[189296]: 2025-11-28 18:14:21.153 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:14:23 compute-0 podman[246814]: 2025-11-28 18:14:23.018907543 +0000 UTC m=+0.069913329 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:14:23 compute-0 nova_compute[189296]: 2025-11-28 18:14:23.452 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:14:24 compute-0 nova_compute[189296]: 2025-11-28 18:14:24.621 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:14:26 compute-0 nova_compute[189296]: 2025-11-28 18:14:26.013 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:14:28 compute-0 nova_compute[189296]: 2025-11-28 18:14:28.454 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:14:29 compute-0 podman[203494]: time="2025-11-28T18:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:14:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 28 18:14:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4313 "" "Go-http-client/1.1"
Nov 28 18:14:31 compute-0 nova_compute[189296]: 2025-11-28 18:14:31.018 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:14:31 compute-0 openstack_network_exporter[205632]: ERROR   18:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:14:31 compute-0 openstack_network_exporter[205632]: ERROR   18:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:14:31 compute-0 openstack_network_exporter[205632]: ERROR   18:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:14:31 compute-0 openstack_network_exporter[205632]: ERROR   18:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:14:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:14:31 compute-0 openstack_network_exporter[205632]: ERROR   18:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:14:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:14:33 compute-0 nova_compute[189296]: 2025-11-28 18:14:33.456 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:14:35 compute-0 podman[246839]: 2025-11-28 18:14:35.063435269 +0000 UTC m=+0.124901125 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 28 18:14:35 compute-0 podman[246838]: 2025-11-28 18:14:35.070260395 +0000 UTC m=+0.122208599 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, architecture=x86_64, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc.)
Nov 28 18:14:35 compute-0 podman[246840]: 2025-11-28 18:14:35.078583427 +0000 UTC m=+0.121827310 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible)
Nov 28 18:14:36 compute-0 nova_compute[189296]: 2025-11-28 18:14:36.021 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:14:38 compute-0 nova_compute[189296]: 2025-11-28 18:14:38.459 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:14:41 compute-0 nova_compute[189296]: 2025-11-28 18:14:41.026 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:14:41 compute-0 podman[246895]: 2025-11-28 18:14:41.070562334 +0000 UTC m=+0.116261415 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 28 18:14:41 compute-0 podman[246896]: 2025-11-28 18:14:41.080802953 +0000 UTC m=+0.117482896 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, 
io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 28 18:14:43 compute-0 podman[246928]: 2025-11-28 18:14:43.023946849 +0000 UTC m=+0.081140764 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 28 18:14:43 compute-0 podman[246929]: 2025-11-28 18:14:43.065923449 +0000 UTC m=+0.105443374 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, name=ubi9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, release-0.7.12=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.buildah.version=1.29.0)
Nov 28 18:14:43 compute-0 nova_compute[189296]: 2025-11-28 18:14:43.464 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:14:45 compute-0 podman[246970]: 2025-11-28 18:14:45.052308805 +0000 UTC m=+0.109633835 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:14:46 compute-0 nova_compute[189296]: 2025-11-28 18:14:46.029 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:14:48 compute-0 nova_compute[189296]: 2025-11-28 18:14:48.466 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:14:51 compute-0 nova_compute[189296]: 2025-11-28 18:14:51.035 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.982 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.982 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.982 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.983 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc143395760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.983 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.985 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.985 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.985 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.985 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.985 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.985 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.985 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.986 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.986 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.986 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.986 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.986 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.986 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.987 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.988 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc1433970b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.988 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.988 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc1433971d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.988 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.988 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc143397c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.989 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.989 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc143397620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.989 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.989 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.990 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc143397260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.990 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.991 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc143397290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.990 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.991 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.991 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc143408290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.991 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.991 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc1433972f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.992 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.992 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc144640f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.992 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.992 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc1433976b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.992 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.992 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc143397fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.992 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.992 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc14457db80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.992 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.992 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc143397950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.993 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.993 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc143397380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.993 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.993 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc143397bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.993 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.993 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc1433973e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.993 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.993 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc143397c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.994 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.994 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc143397ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.994 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.994 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc1460ad370>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.994 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.994 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc143397d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.994 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.994 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc143397e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.994 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.994 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc143397650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.995 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.995 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc143397e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.995 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.995 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc143397f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.995 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.995 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc143397230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.995 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.995 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.996 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.996 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.996 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.996 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.996 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.996 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.996 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.996 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.996 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.996 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.997 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.997 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.997 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.997 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.997 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.997 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.997 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.997 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:14:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.998 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:14:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.998 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:14:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.998 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:14:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.998 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:14:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.998 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:14:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.998 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:14:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:14:51.998 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:14:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:14:52.624 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:14:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:14:52.625 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:14:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:14:52.626 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:14:53 compute-0 nova_compute[189296]: 2025-11-28 18:14:53.468 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:14:54 compute-0 podman[246999]: 2025-11-28 18:14:54.053798404 +0000 UTC m=+0.096427653 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 28 18:14:56 compute-0 nova_compute[189296]: 2025-11-28 18:14:56.039 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:14:58 compute-0 nova_compute[189296]: 2025-11-28 18:14:58.471 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:14:59 compute-0 podman[203494]: time="2025-11-28T18:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:14:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 28 18:14:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4320 "" "Go-http-client/1.1"
Nov 28 18:15:01 compute-0 nova_compute[189296]: 2025-11-28 18:15:01.042 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:15:01 compute-0 openstack_network_exporter[205632]: ERROR   18:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:15:01 compute-0 openstack_network_exporter[205632]: ERROR   18:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:15:01 compute-0 openstack_network_exporter[205632]: ERROR   18:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:15:01 compute-0 openstack_network_exporter[205632]: ERROR   18:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:15:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:15:01 compute-0 openstack_network_exporter[205632]: ERROR   18:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:15:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:15:03 compute-0 nova_compute[189296]: 2025-11-28 18:15:03.473 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:15:06 compute-0 podman[247024]: 2025-11-28 18:15:06.045714061 +0000 UTC m=+0.089147656 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 28 18:15:06 compute-0 nova_compute[189296]: 2025-11-28 18:15:06.047 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:15:06 compute-0 podman[247023]: 2025-11-28 18:15:06.066286511 +0000 UTC m=+0.106530619 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=f26160204c78771e78cdd2489258319b, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 28 18:15:06 compute-0 podman[247022]: 2025-11-28 18:15:06.105480003 +0000 UTC m=+0.155735274 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-type=git, release=1755695350)
Nov 28 18:15:08 compute-0 nova_compute[189296]: 2025-11-28 18:15:08.475 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:15:08 compute-0 nova_compute[189296]: 2025-11-28 18:15:08.640 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:15:11 compute-0 nova_compute[189296]: 2025-11-28 18:15:11.052 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:15:12 compute-0 podman[247079]: 2025-11-28 18:15:12.04806172 +0000 UTC m=+0.107679558 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:15:12 compute-0 podman[247080]: 2025-11-28 18:15:12.096846915 +0000 UTC m=+0.137561473 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi)
Nov 28 18:15:12 compute-0 nova_compute[189296]: 2025-11-28 18:15:12.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:15:12 compute-0 nova_compute[189296]: 2025-11-28 18:15:12.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:15:12 compute-0 nova_compute[189296]: 2025-11-28 18:15:12.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 18:15:12 compute-0 nova_compute[189296]: 2025-11-28 18:15:12.643 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 28 18:15:13 compute-0 nova_compute[189296]: 2025-11-28 18:15:13.478 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:15:13 compute-0 nova_compute[189296]: 2025-11-28 18:15:13.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:15:13 compute-0 nova_compute[189296]: 2025-11-28 18:15:13.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:15:14 compute-0 podman[247117]: 2025-11-28 18:15:14.004206942 +0000 UTC m=+0.066897676 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 28 18:15:14 compute-0 podman[247118]: 2025-11-28 18:15:14.063465272 +0000 UTC m=+0.118901980 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, architecture=x86_64, config_id=edpm, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, release-0.7.12=, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, container_name=kepler)
Nov 28 18:15:14 compute-0 nova_compute[189296]: 2025-11-28 18:15:14.631 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:15:16 compute-0 podman[247159]: 2025-11-28 18:15:16.039261631 +0000 UTC m=+0.093762259 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:15:16 compute-0 nova_compute[189296]: 2025-11-28 18:15:16.053 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:15:17 compute-0 nova_compute[189296]: 2025-11-28 18:15:17.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:15:17 compute-0 nova_compute[189296]: 2025-11-28 18:15:17.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:15:17 compute-0 nova_compute[189296]: 2025-11-28 18:15:17.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:15:17 compute-0 nova_compute[189296]: 2025-11-28 18:15:17.661 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:15:17 compute-0 nova_compute[189296]: 2025-11-28 18:15:17.662 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:15:17 compute-0 nova_compute[189296]: 2025-11-28 18:15:17.662 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:15:17 compute-0 nova_compute[189296]: 2025-11-28 18:15:17.663 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:15:17 compute-0 nova_compute[189296]: 2025-11-28 18:15:17.950 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:15:17 compute-0 nova_compute[189296]: 2025-11-28 18:15:17.951 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5371MB free_disk=72.3803482055664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:15:17 compute-0 nova_compute[189296]: 2025-11-28 18:15:17.951 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:15:17 compute-0 nova_compute[189296]: 2025-11-28 18:15:17.951 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:15:18 compute-0 nova_compute[189296]: 2025-11-28 18:15:18.036 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:15:18 compute-0 nova_compute[189296]: 2025-11-28 18:15:18.037 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:15:18 compute-0 nova_compute[189296]: 2025-11-28 18:15:18.066 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:15:18 compute-0 nova_compute[189296]: 2025-11-28 18:15:18.085 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:15:18 compute-0 nova_compute[189296]: 2025-11-28 18:15:18.086 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:15:18 compute-0 nova_compute[189296]: 2025-11-28 18:15:18.087 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:15:18 compute-0 nova_compute[189296]: 2025-11-28 18:15:18.483 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:15:19 compute-0 nova_compute[189296]: 2025-11-28 18:15:19.087 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:15:20 compute-0 nova_compute[189296]: 2025-11-28 18:15:20.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:15:21 compute-0 nova_compute[189296]: 2025-11-28 18:15:21.058 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:15:23 compute-0 nova_compute[189296]: 2025-11-28 18:15:23.486 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:15:25 compute-0 podman[247186]: 2025-11-28 18:15:25.011765554 +0000 UTC m=+0.062216296 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 28 18:15:26 compute-0 nova_compute[189296]: 2025-11-28 18:15:26.062 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:15:28 compute-0 nova_compute[189296]: 2025-11-28 18:15:28.488 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:15:29 compute-0 podman[203494]: time="2025-11-28T18:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:15:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 28 18:15:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4317 "" "Go-http-client/1.1"
Nov 28 18:15:31 compute-0 nova_compute[189296]: 2025-11-28 18:15:31.067 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:15:31 compute-0 openstack_network_exporter[205632]: ERROR   18:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:15:31 compute-0 openstack_network_exporter[205632]: ERROR   18:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:15:31 compute-0 openstack_network_exporter[205632]: ERROR   18:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:15:31 compute-0 openstack_network_exporter[205632]: ERROR   18:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:15:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:15:31 compute-0 openstack_network_exporter[205632]: ERROR   18:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:15:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:15:33 compute-0 nova_compute[189296]: 2025-11-28 18:15:33.491 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:15:36 compute-0 nova_compute[189296]: 2025-11-28 18:15:36.073 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:15:37 compute-0 podman[247210]: 2025-11-28 18:15:37.010412458 +0000 UTC m=+0.068797198 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 28 18:15:37 compute-0 podman[247211]: 2025-11-28 18:15:37.011070253 +0000 UTC m=+0.065795984 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, tcib_build_tag=f26160204c78771e78cdd2489258319b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 28 18:15:37 compute-0 podman[247212]: 2025-11-28 18:15:37.050524081 +0000 UTC m=+0.090175622 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3)
Nov 28 18:15:38 compute-0 nova_compute[189296]: 2025-11-28 18:15:38.495 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:15:41 compute-0 nova_compute[189296]: 2025-11-28 18:15:41.073 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:15:43 compute-0 podman[247268]: 2025-11-28 18:15:43.033798656 +0000 UTC m=+0.083440278 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:15:43 compute-0 podman[247269]: 2025-11-28 18:15:43.038404748 +0000 UTC m=+0.086975353 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:15:43 compute-0 nova_compute[189296]: 2025-11-28 18:15:43.499 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:15:44 compute-0 podman[247304]: 2025-11-28 18:15:44.743281125 +0000 UTC m=+0.071487134 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 28 18:15:44 compute-0 podman[247305]: 2025-11-28 18:15:44.801359598 +0000 UTC m=+0.122676328 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release-0.7.12=, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, 
vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, managed_by=edpm_ansible, config_id=edpm)
Nov 28 18:15:46 compute-0 nova_compute[189296]: 2025-11-28 18:15:46.076 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:15:47 compute-0 podman[247347]: 2025-11-28 18:15:47.090919158 +0000 UTC m=+0.149714400 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 28 18:15:48 compute-0 nova_compute[189296]: 2025-11-28 18:15:48.503 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:15:51 compute-0 nova_compute[189296]: 2025-11-28 18:15:51.082 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:15:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:15:52.624 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:15:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:15:52.625 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:15:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:15:52.625 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:15:53 compute-0 nova_compute[189296]: 2025-11-28 18:15:53.505 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:15:56 compute-0 podman[247374]: 2025-11-28 18:15:56.013367548 +0000 UTC m=+0.066971313 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:15:56 compute-0 nova_compute[189296]: 2025-11-28 18:15:56.086 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:15:58 compute-0 nova_compute[189296]: 2025-11-28 18:15:58.508 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:15:59 compute-0 podman[203494]: time="2025-11-28T18:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:15:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 28 18:15:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4322 "" "Go-http-client/1.1"
Nov 28 18:16:01 compute-0 nova_compute[189296]: 2025-11-28 18:16:01.089 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:16:01 compute-0 openstack_network_exporter[205632]: ERROR   18:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:16:01 compute-0 openstack_network_exporter[205632]: ERROR   18:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:16:01 compute-0 openstack_network_exporter[205632]: ERROR   18:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:16:01 compute-0 openstack_network_exporter[205632]: ERROR   18:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:16:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:16:01 compute-0 openstack_network_exporter[205632]: ERROR   18:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:16:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:16:03 compute-0 nova_compute[189296]: 2025-11-28 18:16:03.510 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:16:06 compute-0 nova_compute[189296]: 2025-11-28 18:16:06.094 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:16:08 compute-0 podman[247399]: 2025-11-28 18:16:08.039088586 +0000 UTC m=+0.081641242 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:16:08 compute-0 podman[247398]: 2025-11-28 18:16:08.047255256 +0000 UTC m=+0.096731732 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=f26160204c78771e78cdd2489258319b, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 28 18:16:08 compute-0 podman[247397]: 2025-11-28 18:16:08.060910071 +0000 UTC m=+0.119896101 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, maintainer=Red Hat, Inc., io.openshift.expose-services=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container)
Nov 28 18:16:08 compute-0 nova_compute[189296]: 2025-11-28 18:16:08.512 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:16:08 compute-0 nova_compute[189296]: 2025-11-28 18:16:08.620 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:16:11 compute-0 nova_compute[189296]: 2025-11-28 18:16:11.099 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:16:13 compute-0 nova_compute[189296]: 2025-11-28 18:16:13.514 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:16:13 compute-0 nova_compute[189296]: 2025-11-28 18:16:13.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:16:13 compute-0 nova_compute[189296]: 2025-11-28 18:16:13.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:16:13 compute-0 nova_compute[189296]: 2025-11-28 18:16:13.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 18:16:13 compute-0 nova_compute[189296]: 2025-11-28 18:16:13.640 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 28 18:16:14 compute-0 podman[247455]: 2025-11-28 18:16:14.049843854 +0000 UTC m=+0.092669862 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:16:14 compute-0 podman[247454]: 2025-11-28 18:16:14.068054941 +0000 UTC m=+0.119700886 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 28 18:16:14 compute-0 nova_compute[189296]: 2025-11-28 18:16:14.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:16:15 compute-0 podman[247492]: 2025-11-28 18:16:15.045850862 +0000 UTC m=+0.103015926 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 28 18:16:15 compute-0 podman[247493]: 2025-11-28 18:16:15.051793258 +0000 UTC m=+0.103869478 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, release-0.7.12=, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, vendor=Red Hat, Inc., version=9.4, managed_by=edpm_ansible, release=1214.1726694543, container_name=kepler, maintainer=Red Hat, Inc.)
Nov 28 18:16:15 compute-0 nova_compute[189296]: 2025-11-28 18:16:15.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:16:15 compute-0 nova_compute[189296]: 2025-11-28 18:16:15.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:16:16 compute-0 nova_compute[189296]: 2025-11-28 18:16:16.105 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:16:17 compute-0 nova_compute[189296]: 2025-11-28 18:16:17.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:16:17 compute-0 nova_compute[189296]: 2025-11-28 18:16:17.705 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:16:17 compute-0 nova_compute[189296]: 2025-11-28 18:16:17.705 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:16:17 compute-0 nova_compute[189296]: 2025-11-28 18:16:17.706 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:16:17 compute-0 nova_compute[189296]: 2025-11-28 18:16:17.706 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:16:18 compute-0 podman[247533]: 2025-11-28 18:16:18.093091927 +0000 UTC m=+0.154111479 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 28 18:16:18 compute-0 nova_compute[189296]: 2025-11-28 18:16:18.104 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:16:18 compute-0 nova_compute[189296]: 2025-11-28 18:16:18.106 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5378MB free_disk=72.38032913208008GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:16:18 compute-0 nova_compute[189296]: 2025-11-28 18:16:18.106 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:16:18 compute-0 nova_compute[189296]: 2025-11-28 18:16:18.106 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:16:18 compute-0 nova_compute[189296]: 2025-11-28 18:16:18.168 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:16:18 compute-0 nova_compute[189296]: 2025-11-28 18:16:18.169 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:16:18 compute-0 nova_compute[189296]: 2025-11-28 18:16:18.182 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing inventories for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 28 18:16:18 compute-0 nova_compute[189296]: 2025-11-28 18:16:18.199 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Updating ProviderTree inventory for provider d10a9930-4504-4222-97f7-6727a5a2d43b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 28 18:16:18 compute-0 nova_compute[189296]: 2025-11-28 18:16:18.200 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Updating inventory in ProviderTree for provider d10a9930-4504-4222-97f7-6727a5a2d43b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 28 18:16:18 compute-0 nova_compute[189296]: 2025-11-28 18:16:18.225 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing aggregate associations for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 28 18:16:18 compute-0 nova_compute[189296]: 2025-11-28 18:16:18.298 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing trait associations for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b, traits: HW_CPU_X86_ABM,COMPUTE_NODE,HW_CPU_X86_SVM,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SATA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 28 18:16:18 compute-0 nova_compute[189296]: 2025-11-28 18:16:18.344 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:16:18 compute-0 nova_compute[189296]: 2025-11-28 18:16:18.363 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:16:18 compute-0 nova_compute[189296]: 2025-11-28 18:16:18.366 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:16:18 compute-0 nova_compute[189296]: 2025-11-28 18:16:18.367 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.261s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:16:18 compute-0 nova_compute[189296]: 2025-11-28 18:16:18.518 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:16:19 compute-0 nova_compute[189296]: 2025-11-28 18:16:19.368 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:16:19 compute-0 nova_compute[189296]: 2025-11-28 18:16:19.369 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:16:20 compute-0 nova_compute[189296]: 2025-11-28 18:16:20.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:16:21 compute-0 nova_compute[189296]: 2025-11-28 18:16:21.107 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:16:22 compute-0 nova_compute[189296]: 2025-11-28 18:16:22.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:16:23 compute-0 nova_compute[189296]: 2025-11-28 18:16:23.521 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:16:24 compute-0 nova_compute[189296]: 2025-11-28 18:16:24.621 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:16:26 compute-0 nova_compute[189296]: 2025-11-28 18:16:26.111 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:16:27 compute-0 podman[247557]: 2025-11-28 18:16:27.013610839 +0000 UTC m=+0.076330713 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 28 18:16:28 compute-0 nova_compute[189296]: 2025-11-28 18:16:28.525 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:16:29 compute-0 podman[203494]: time="2025-11-28T18:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:16:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 28 18:16:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4314 "" "Go-http-client/1.1"
Nov 28 18:16:31 compute-0 nova_compute[189296]: 2025-11-28 18:16:31.115 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:16:31 compute-0 openstack_network_exporter[205632]: ERROR   18:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:16:31 compute-0 openstack_network_exporter[205632]: ERROR   18:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:16:31 compute-0 openstack_network_exporter[205632]: ERROR   18:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:16:31 compute-0 openstack_network_exporter[205632]: ERROR   18:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:16:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:16:31 compute-0 openstack_network_exporter[205632]: ERROR   18:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:16:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:16:33 compute-0 nova_compute[189296]: 2025-11-28 18:16:33.527 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:16:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:16:34.555 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:8b:d3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:a2:f8:d3:3f:9a'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:16:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:16:34.556 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 28 18:16:34 compute-0 nova_compute[189296]: 2025-11-28 18:16:34.559 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:16:36 compute-0 nova_compute[189296]: 2025-11-28 18:16:36.119 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:16:38 compute-0 nova_compute[189296]: 2025-11-28 18:16:38.530 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:16:38 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:16:38.559 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d60b742f-7e94-4137-b50a-cfc8eac54167, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:16:39 compute-0 podman[247582]: 2025-11-28 18:16:39.064701889 +0000 UTC m=+0.094275412 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 28 18:16:39 compute-0 podman[247580]: 2025-11-28 18:16:39.076681382 +0000 UTC m=+0.119719766 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, io.buildah.version=1.33.7, container_name=openstack_network_exporter, config_id=edpm, architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=)
Nov 28 18:16:39 compute-0 podman[247581]: 2025-11-28 18:16:39.085012496 +0000 UTC m=+0.122338530 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base 
Image, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_id=edpm)
Nov 28 18:16:41 compute-0 nova_compute[189296]: 2025-11-28 18:16:41.124 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:16:43 compute-0 nova_compute[189296]: 2025-11-28 18:16:43.532 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:16:44 compute-0 podman[247633]: 2025-11-28 18:16:44.7587004 +0000 UTC m=+0.088988333 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 28 18:16:44 compute-0 podman[247632]: 2025-11-28 18:16:44.76155363 +0000 UTC m=+0.072858827 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 28 18:16:46 compute-0 podman[247669]: 2025-11-28 18:16:46.029587056 +0000 UTC m=+0.082652108 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_id=edpm, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, release=1214.1726694543, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0)
Nov 28 18:16:46 compute-0 podman[247668]: 2025-11-28 18:16:46.047236969 +0000 UTC m=+0.108804539 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 28 18:16:46 compute-0 nova_compute[189296]: 2025-11-28 18:16:46.129 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:16:48 compute-0 nova_compute[189296]: 2025-11-28 18:16:48.537 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:16:49 compute-0 podman[247709]: 2025-11-28 18:16:49.061866304 +0000 UTC m=+0.123660242 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 28 18:16:51 compute-0 nova_compute[189296]: 2025-11-28 18:16:51.135 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.984 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.984 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.984 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.985 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc143395760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.985 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.986 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.986 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.986 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.986 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.986 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.986 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.986 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.986 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da5160>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.989 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.989 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc1433970b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.989 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.989 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc1433971d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.989 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.989 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc143397c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.989 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.990 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc143397620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.990 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.990 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc143397260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.990 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.990 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc143397290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.990 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.990 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc143408290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.991 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.991 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc1433972f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.991 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.991 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc144640f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.991 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.991 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc1433976b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.991 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.991 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc143397fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.991 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.992 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc14457db80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.992 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.992 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc143397950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.992 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.992 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc143397380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.992 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.992 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc143397bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.992 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.993 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc1433973e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.993 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.993 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc143397c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.993 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.993 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc143397ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.993 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.993 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc1460ad370>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.993 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.993 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc143397d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.994 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.994 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc143397e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.994 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.994 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc143397650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.994 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.994 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc143397e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.994 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.994 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc143397f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.995 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.995 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc143397230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.995 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.995 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.995 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.995 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.995 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.996 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.996 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.996 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.996 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.996 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.996 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:16:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.996 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:16:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.996 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:16:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.996 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:16:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.996 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:16:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.996 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:16:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.996 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:16:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.996 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:16:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.996 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:16:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.996 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:16:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.996 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:16:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.997 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:16:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.997 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:16:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.997 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:16:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.997 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:16:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.997 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:16:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:16:51.997 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:16:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:16:52.628 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:16:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:16:52.629 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:16:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:16:52.629 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:16:53 compute-0 nova_compute[189296]: 2025-11-28 18:16:53.543 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:16:56 compute-0 nova_compute[189296]: 2025-11-28 18:16:56.139 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:16:58 compute-0 podman[247737]: 2025-11-28 18:16:58.0180382 +0000 UTC m=+0.079556071 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:16:58 compute-0 nova_compute[189296]: 2025-11-28 18:16:58.546 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:16:59 compute-0 podman[203494]: time="2025-11-28T18:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:16:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 28 18:16:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4321 "" "Go-http-client/1.1"
Nov 28 18:17:01 compute-0 nova_compute[189296]: 2025-11-28 18:17:01.144 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:01 compute-0 openstack_network_exporter[205632]: ERROR   18:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:17:01 compute-0 openstack_network_exporter[205632]: ERROR   18:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:17:01 compute-0 openstack_network_exporter[205632]: ERROR   18:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:17:01 compute-0 openstack_network_exporter[205632]: ERROR   18:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:17:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:17:01 compute-0 openstack_network_exporter[205632]: ERROR   18:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:17:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:17:03 compute-0 nova_compute[189296]: 2025-11-28 18:17:03.549 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:04 compute-0 ovn_controller[97771]: 2025-11-28T18:17:04Z|00072|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Nov 28 18:17:06 compute-0 nova_compute[189296]: 2025-11-28 18:17:06.156 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:08 compute-0 nova_compute[189296]: 2025-11-28 18:17:08.549 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:10 compute-0 podman[247760]: 2025-11-28 18:17:10.060334393 +0000 UTC m=+0.100450032 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, release=1755695350, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 28 18:17:10 compute-0 podman[247762]: 2025-11-28 18:17:10.066184048 +0000 UTC m=+0.105688703 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd)
Nov 28 18:17:10 compute-0 podman[247761]: 2025-11-28 18:17:10.072796789 +0000 UTC m=+0.107758063 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=f26160204c78771e78cdd2489258319b, io.buildah.version=1.41.4, maintainer=OpenStack 
Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Nov 28 18:17:10 compute-0 nova_compute[189296]: 2025-11-28 18:17:10.674 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:17:11 compute-0 nova_compute[189296]: 2025-11-28 18:17:11.161 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:13 compute-0 nova_compute[189296]: 2025-11-28 18:17:13.552 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:14 compute-0 nova_compute[189296]: 2025-11-28 18:17:14.628 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:17:14 compute-0 nova_compute[189296]: 2025-11-28 18:17:14.629 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:17:14 compute-0 nova_compute[189296]: 2025-11-28 18:17:14.629 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 18:17:14 compute-0 nova_compute[189296]: 2025-11-28 18:17:14.946 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 28 18:17:14 compute-0 nova_compute[189296]: 2025-11-28 18:17:14.946 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:17:15 compute-0 podman[247816]: 2025-11-28 18:17:15.000133657 +0000 UTC m=+0.063898527 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Nov 28 18:17:15 compute-0 podman[247817]: 2025-11-28 18:17:15.008689487 +0000 UTC m=+0.066968603 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Nov 28 18:17:16 compute-0 nova_compute[189296]: 2025-11-28 18:17:16.164 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:16 compute-0 nova_compute[189296]: 2025-11-28 18:17:16.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:17:17 compute-0 podman[247854]: 2025-11-28 18:17:17.02630024 +0000 UTC m=+0.081450619 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release-0.7.12=, build-date=2024-09-18T21:23:30, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., version=9.4, name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 28 18:17:17 compute-0 podman[247853]: 2025-11-28 18:17:17.028905153 +0000 UTC m=+0.090770546 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:17:17 compute-0 nova_compute[189296]: 2025-11-28 18:17:17.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:17:18 compute-0 nova_compute[189296]: 2025-11-28 18:17:18.556 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:18 compute-0 nova_compute[189296]: 2025-11-28 18:17:18.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:17:18 compute-0 nova_compute[189296]: 2025-11-28 18:17:18.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:17:19 compute-0 nova_compute[189296]: 2025-11-28 18:17:19.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:17:19 compute-0 nova_compute[189296]: 2025-11-28 18:17:19.725 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:17:19 compute-0 nova_compute[189296]: 2025-11-28 18:17:19.726 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:17:19 compute-0 nova_compute[189296]: 2025-11-28 18:17:19.726 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:17:19 compute-0 nova_compute[189296]: 2025-11-28 18:17:19.727 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:17:19 compute-0 nova_compute[189296]: 2025-11-28 18:17:19.998 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:20 compute-0 nova_compute[189296]: 2025-11-28 18:17:20.021 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:20 compute-0 podman[247896]: 2025-11-28 18:17:20.116648482 +0000 UTC m=+0.171686600 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125)
Nov 28 18:17:20 compute-0 nova_compute[189296]: 2025-11-28 18:17:20.162 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:17:20 compute-0 nova_compute[189296]: 2025-11-28 18:17:20.164 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5393MB free_disk=72.38032913208008GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:17:20 compute-0 nova_compute[189296]: 2025-11-28 18:17:20.164 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:17:20 compute-0 nova_compute[189296]: 2025-11-28 18:17:20.164 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:17:20 compute-0 nova_compute[189296]: 2025-11-28 18:17:20.325 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:17:20 compute-0 nova_compute[189296]: 2025-11-28 18:17:20.326 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:17:20 compute-0 nova_compute[189296]: 2025-11-28 18:17:20.352 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:17:20 compute-0 nova_compute[189296]: 2025-11-28 18:17:20.528 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:17:20 compute-0 nova_compute[189296]: 2025-11-28 18:17:20.531 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:17:20 compute-0 nova_compute[189296]: 2025-11-28 18:17:20.532 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:17:21 compute-0 nova_compute[189296]: 2025-11-28 18:17:21.170 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:21 compute-0 nova_compute[189296]: 2025-11-28 18:17:21.899 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:22 compute-0 nova_compute[189296]: 2025-11-28 18:17:22.603 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:22 compute-0 nova_compute[189296]: 2025-11-28 18:17:22.747 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:23 compute-0 nova_compute[189296]: 2025-11-28 18:17:23.533 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:17:23 compute-0 nova_compute[189296]: 2025-11-28 18:17:23.557 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:24 compute-0 nova_compute[189296]: 2025-11-28 18:17:24.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:17:26 compute-0 nova_compute[189296]: 2025-11-28 18:17:26.175 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:28 compute-0 nova_compute[189296]: 2025-11-28 18:17:28.561 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:29 compute-0 podman[247923]: 2025-11-28 18:17:29.000349321 +0000 UTC m=+0.061872458 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:17:29 compute-0 nova_compute[189296]: 2025-11-28 18:17:29.700 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:29 compute-0 podman[203494]: time="2025-11-28T18:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:17:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 28 18:17:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4320 "" "Go-http-client/1.1"
Nov 28 18:17:30 compute-0 nova_compute[189296]: 2025-11-28 18:17:30.534 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:31 compute-0 nova_compute[189296]: 2025-11-28 18:17:31.179 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:31 compute-0 openstack_network_exporter[205632]: ERROR   18:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:17:31 compute-0 openstack_network_exporter[205632]: ERROR   18:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:17:31 compute-0 openstack_network_exporter[205632]: ERROR   18:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:17:31 compute-0 openstack_network_exporter[205632]: ERROR   18:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:17:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:17:31 compute-0 openstack_network_exporter[205632]: ERROR   18:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:17:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:17:33 compute-0 nova_compute[189296]: 2025-11-28 18:17:33.562 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:34.758 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:8b:d3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:a2:f8:d3:3f:9a'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:17:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:34.759 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 28 18:17:34 compute-0 nova_compute[189296]: 2025-11-28 18:17:34.761 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:35 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:35.761 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d60b742f-7e94-4137-b50a-cfc8eac54167, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:17:36 compute-0 nova_compute[189296]: 2025-11-28 18:17:36.180 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:37 compute-0 nova_compute[189296]: 2025-11-28 18:17:37.121 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:37 compute-0 nova_compute[189296]: 2025-11-28 18:17:37.954 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:38 compute-0 nova_compute[189296]: 2025-11-28 18:17:38.215 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:38 compute-0 nova_compute[189296]: 2025-11-28 18:17:38.565 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:41 compute-0 podman[247947]: 2025-11-28 18:17:41.006611913 +0000 UTC m=+0.071556685 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.openshift.tags=minimal rhel9, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Nov 28 18:17:41 compute-0 podman[247949]: 2025-11-28 18:17:41.023912007 +0000 UTC m=+0.078265320 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:17:41 compute-0 podman[247948]: 2025-11-28 18:17:41.043000815 +0000 UTC m=+0.092471908 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=f26160204c78771e78cdd2489258319b, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute)
Nov 28 18:17:41 compute-0 nova_compute[189296]: 2025-11-28 18:17:41.186 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:42 compute-0 nova_compute[189296]: 2025-11-28 18:17:42.166 189300 DEBUG oslo_concurrency.lockutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Acquiring lock "9d9438df-a3bc-4004-95a3-0d76f449fe7e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:17:42 compute-0 nova_compute[189296]: 2025-11-28 18:17:42.167 189300 DEBUG oslo_concurrency.lockutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Lock "9d9438df-a3bc-4004-95a3-0d76f449fe7e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:17:42 compute-0 nova_compute[189296]: 2025-11-28 18:17:42.187 189300 DEBUG nova.compute.manager [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 28 18:17:42 compute-0 nova_compute[189296]: 2025-11-28 18:17:42.321 189300 DEBUG oslo_concurrency.lockutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:17:42 compute-0 nova_compute[189296]: 2025-11-28 18:17:42.322 189300 DEBUG oslo_concurrency.lockutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:17:42 compute-0 nova_compute[189296]: 2025-11-28 18:17:42.332 189300 DEBUG nova.virt.hardware [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 28 18:17:42 compute-0 nova_compute[189296]: 2025-11-28 18:17:42.332 189300 INFO nova.compute.claims [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 28 18:17:42 compute-0 nova_compute[189296]: 2025-11-28 18:17:42.483 189300 DEBUG nova.compute.provider_tree [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:17:42 compute-0 nova_compute[189296]: 2025-11-28 18:17:42.499 189300 DEBUG nova.scheduler.client.report [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:17:42 compute-0 nova_compute[189296]: 2025-11-28 18:17:42.519 189300 DEBUG oslo_concurrency.lockutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:17:42 compute-0 nova_compute[189296]: 2025-11-28 18:17:42.520 189300 DEBUG nova.compute.manager [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 28 18:17:42 compute-0 nova_compute[189296]: 2025-11-28 18:17:42.575 189300 DEBUG nova.compute.manager [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 28 18:17:42 compute-0 nova_compute[189296]: 2025-11-28 18:17:42.576 189300 DEBUG nova.network.neutron [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 28 18:17:42 compute-0 nova_compute[189296]: 2025-11-28 18:17:42.593 189300 INFO nova.virt.libvirt.driver [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 28 18:17:42 compute-0 nova_compute[189296]: 2025-11-28 18:17:42.615 189300 DEBUG nova.compute.manager [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 28 18:17:42 compute-0 nova_compute[189296]: 2025-11-28 18:17:42.728 189300 DEBUG nova.compute.manager [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 28 18:17:42 compute-0 nova_compute[189296]: 2025-11-28 18:17:42.730 189300 DEBUG nova.virt.libvirt.driver [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 28 18:17:42 compute-0 nova_compute[189296]: 2025-11-28 18:17:42.730 189300 INFO nova.virt.libvirt.driver [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Creating image(s)#033[00m
Nov 28 18:17:42 compute-0 nova_compute[189296]: 2025-11-28 18:17:42.731 189300 DEBUG oslo_concurrency.lockutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Acquiring lock "/var/lib/nova/instances/9d9438df-a3bc-4004-95a3-0d76f449fe7e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:17:42 compute-0 nova_compute[189296]: 2025-11-28 18:17:42.732 189300 DEBUG oslo_concurrency.lockutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Lock "/var/lib/nova/instances/9d9438df-a3bc-4004-95a3-0d76f449fe7e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:17:42 compute-0 nova_compute[189296]: 2025-11-28 18:17:42.732 189300 DEBUG oslo_concurrency.lockutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Lock "/var/lib/nova/instances/9d9438df-a3bc-4004-95a3-0d76f449fe7e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:17:42 compute-0 nova_compute[189296]: 2025-11-28 18:17:42.733 189300 DEBUG oslo_concurrency.lockutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Acquiring lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:17:42 compute-0 nova_compute[189296]: 2025-11-28 18:17:42.734 189300 DEBUG oslo_concurrency.lockutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:17:42 compute-0 nova_compute[189296]: 2025-11-28 18:17:42.911 189300 DEBUG nova.policy [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '28259861c020436091f3ab3eb680fa5d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fb27a9d222b44ca3a79da5ce054611e5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 28 18:17:43 compute-0 nova_compute[189296]: 2025-11-28 18:17:43.568 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:44 compute-0 nova_compute[189296]: 2025-11-28 18:17:44.858 189300 DEBUG nova.network.neutron [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Successfully created port: 0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 28 18:17:44 compute-0 nova_compute[189296]: 2025-11-28 18:17:44.897 189300 DEBUG oslo_concurrency.processutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:17:44 compute-0 nova_compute[189296]: 2025-11-28 18:17:44.953 189300 DEBUG oslo_concurrency.processutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c.part --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:17:44 compute-0 nova_compute[189296]: 2025-11-28 18:17:44.955 189300 DEBUG nova.virt.images [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] ffec9e61-65fb-46ae-8d34-338639229ec3 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 28 18:17:44 compute-0 nova_compute[189296]: 2025-11-28 18:17:44.956 189300 DEBUG nova.privsep.utils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 28 18:17:44 compute-0 nova_compute[189296]: 2025-11-28 18:17:44.957 189300 DEBUG oslo_concurrency.processutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c.part /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:17:45 compute-0 nova_compute[189296]: 2025-11-28 18:17:45.208 189300 DEBUG oslo_concurrency.processutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c.part /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c.converted" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:17:45 compute-0 nova_compute[189296]: 2025-11-28 18:17:45.214 189300 DEBUG oslo_concurrency.processutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:17:45 compute-0 nova_compute[189296]: 2025-11-28 18:17:45.267 189300 DEBUG oslo_concurrency.processutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c.converted --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:17:45 compute-0 nova_compute[189296]: 2025-11-28 18:17:45.268 189300 DEBUG oslo_concurrency.lockutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:17:45 compute-0 nova_compute[189296]: 2025-11-28 18:17:45.280 189300 DEBUG oslo_concurrency.processutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:17:45 compute-0 nova_compute[189296]: 2025-11-28 18:17:45.371 189300 DEBUG oslo_concurrency.processutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:17:45 compute-0 nova_compute[189296]: 2025-11-28 18:17:45.373 189300 DEBUG oslo_concurrency.lockutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Acquiring lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:17:45 compute-0 nova_compute[189296]: 2025-11-28 18:17:45.375 189300 DEBUG oslo_concurrency.lockutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:17:45 compute-0 nova_compute[189296]: 2025-11-28 18:17:45.399 189300 DEBUG oslo_concurrency.processutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:17:45 compute-0 nova_compute[189296]: 2025-11-28 18:17:45.492 189300 DEBUG oslo_concurrency.processutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:17:45 compute-0 nova_compute[189296]: 2025-11-28 18:17:45.504 189300 DEBUG oslo_concurrency.processutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c,backing_fmt=raw /var/lib/nova/instances/9d9438df-a3bc-4004-95a3-0d76f449fe7e/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:17:45 compute-0 nova_compute[189296]: 2025-11-28 18:17:45.546 189300 DEBUG oslo_concurrency.processutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c,backing_fmt=raw /var/lib/nova/instances/9d9438df-a3bc-4004-95a3-0d76f449fe7e/disk 1073741824" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:17:45 compute-0 nova_compute[189296]: 2025-11-28 18:17:45.547 189300 DEBUG oslo_concurrency.lockutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:17:45 compute-0 nova_compute[189296]: 2025-11-28 18:17:45.548 189300 DEBUG oslo_concurrency.processutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:17:45 compute-0 nova_compute[189296]: 2025-11-28 18:17:45.616 189300 DEBUG oslo_concurrency.processutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:17:45 compute-0 nova_compute[189296]: 2025-11-28 18:17:45.618 189300 DEBUG nova.virt.disk.api [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Checking if we can resize image /var/lib/nova/instances/9d9438df-a3bc-4004-95a3-0d76f449fe7e/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 28 18:17:45 compute-0 nova_compute[189296]: 2025-11-28 18:17:45.618 189300 DEBUG oslo_concurrency.processutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9d9438df-a3bc-4004-95a3-0d76f449fe7e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:17:45 compute-0 nova_compute[189296]: 2025-11-28 18:17:45.713 189300 DEBUG oslo_concurrency.processutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9d9438df-a3bc-4004-95a3-0d76f449fe7e/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:17:45 compute-0 nova_compute[189296]: 2025-11-28 18:17:45.714 189300 DEBUG nova.virt.disk.api [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Cannot resize image /var/lib/nova/instances/9d9438df-a3bc-4004-95a3-0d76f449fe7e/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Nov 28 18:17:45 compute-0 nova_compute[189296]: 2025-11-28 18:17:45.715 189300 DEBUG nova.objects.instance [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Lazy-loading 'migration_context' on Instance uuid 9d9438df-a3bc-4004-95a3-0d76f449fe7e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:17:45 compute-0 nova_compute[189296]: 2025-11-28 18:17:45.733 189300 DEBUG nova.virt.libvirt.driver [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 28 18:17:45 compute-0 nova_compute[189296]: 2025-11-28 18:17:45.733 189300 DEBUG nova.virt.libvirt.driver [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Ensure instance console log exists: /var/lib/nova/instances/9d9438df-a3bc-4004-95a3-0d76f449fe7e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 28 18:17:45 compute-0 nova_compute[189296]: 2025-11-28 18:17:45.734 189300 DEBUG oslo_concurrency.lockutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:17:45 compute-0 nova_compute[189296]: 2025-11-28 18:17:45.734 189300 DEBUG oslo_concurrency.lockutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:17:45 compute-0 nova_compute[189296]: 2025-11-28 18:17:45.734 189300 DEBUG oslo_concurrency.lockutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:17:46 compute-0 podman[248033]: 2025-11-28 18:17:46.032485065 +0000 UTC m=+0.079836918 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:17:46 compute-0 podman[248032]: 2025-11-28 18:17:46.058853492 +0000 UTC m=+0.112669103 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 28 18:17:46 compute-0 nova_compute[189296]: 2025-11-28 18:17:46.190 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:48 compute-0 podman[248068]: 2025-11-28 18:17:48.0269277 +0000 UTC m=+0.085950208 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 18:17:48 compute-0 podman[248069]: 2025-11-28 18:17:48.055725886 +0000 UTC m=+0.106238326 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.openshift.tags=base rhel9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., version=9.4, config_id=edpm, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, distribution-scope=public)
Nov 28 18:17:48 compute-0 nova_compute[189296]: 2025-11-28 18:17:48.572 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:49 compute-0 nova_compute[189296]: 2025-11-28 18:17:49.235 189300 DEBUG nova.network.neutron [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Successfully updated port: 0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 28 18:17:49 compute-0 nova_compute[189296]: 2025-11-28 18:17:49.278 189300 DEBUG oslo_concurrency.lockutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Acquiring lock "refresh_cache-9d9438df-a3bc-4004-95a3-0d76f449fe7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:17:49 compute-0 nova_compute[189296]: 2025-11-28 18:17:49.279 189300 DEBUG oslo_concurrency.lockutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Acquired lock "refresh_cache-9d9438df-a3bc-4004-95a3-0d76f449fe7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:17:49 compute-0 nova_compute[189296]: 2025-11-28 18:17:49.279 189300 DEBUG nova.network.neutron [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 28 18:17:49 compute-0 nova_compute[189296]: 2025-11-28 18:17:49.758 189300 DEBUG nova.network.neutron [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 28 18:17:51 compute-0 podman[248114]: 2025-11-28 18:17:51.084298173 +0000 UTC m=+0.138662771 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller)
Nov 28 18:17:51 compute-0 nova_compute[189296]: 2025-11-28 18:17:51.192 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:52.630 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:17:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:52.630 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:17:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:52.631 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:17:52 compute-0 nova_compute[189296]: 2025-11-28 18:17:52.817 189300 DEBUG nova.network.neutron [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Updating instance_info_cache with network_info: [{"id": "0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98", "address": "fa:16:3e:84:73:08", "network": {"id": "e87bc234-f5cf-4903-8735-1e50c5da2392", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-967785827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb27a9d222b44ca3a79da5ce054611e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c9a98c5-1b", "ovs_interfaceid": "0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:17:52 compute-0 nova_compute[189296]: 2025-11-28 18:17:52.962 189300 DEBUG oslo_concurrency.lockutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Releasing lock "refresh_cache-9d9438df-a3bc-4004-95a3-0d76f449fe7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:17:52 compute-0 nova_compute[189296]: 2025-11-28 18:17:52.962 189300 DEBUG nova.compute.manager [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Instance network_info: |[{"id": "0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98", "address": "fa:16:3e:84:73:08", "network": {"id": "e87bc234-f5cf-4903-8735-1e50c5da2392", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-967785827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb27a9d222b44ca3a79da5ce054611e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c9a98c5-1b", "ovs_interfaceid": "0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 28 18:17:52 compute-0 nova_compute[189296]: 2025-11-28 18:17:52.967 189300 DEBUG nova.virt.libvirt.driver [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Start _get_guest_xml network_info=[{"id": "0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98", "address": "fa:16:3e:84:73:08", "network": {"id": "e87bc234-f5cf-4903-8735-1e50c5da2392", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-967785827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb27a9d222b44ca3a79da5ce054611e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c9a98c5-1b", "ovs_interfaceid": "0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-28T18:16:38Z,direct_url=<?>,disk_format='qcow2',id=ffec9e61-65fb-46ae-8d34-338639229ec3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-28T18:16:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'image_id': 'ffec9e61-65fb-46ae-8d34-338639229ec3'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 28 18:17:52 compute-0 nova_compute[189296]: 2025-11-28 18:17:52.976 189300 WARNING nova.virt.libvirt.driver [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:17:52 compute-0 nova_compute[189296]: 2025-11-28 18:17:52.987 189300 DEBUG nova.virt.libvirt.host [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 28 18:17:52 compute-0 nova_compute[189296]: 2025-11-28 18:17:52.988 189300 DEBUG nova.virt.libvirt.host [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 28 18:17:52 compute-0 nova_compute[189296]: 2025-11-28 18:17:52.993 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:17:52 compute-0 nova_compute[189296]: 2025-11-28 18:17:52.996 189300 DEBUG nova.virt.libvirt.host [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 28 18:17:52 compute-0 nova_compute[189296]: 2025-11-28 18:17:52.996 189300 DEBUG nova.virt.libvirt.host [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 28 18:17:52 compute-0 nova_compute[189296]: 2025-11-28 18:17:52.997 189300 DEBUG nova.virt.libvirt.driver [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 28 18:17:52 compute-0 nova_compute[189296]: 2025-11-28 18:17:52.997 189300 DEBUG nova.virt.hardware [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-28T18:16:37Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b177f611-8f79-4bfd-9a12-e83e9545757b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-28T18:16:38Z,direct_url=<?>,disk_format='qcow2',id=ffec9e61-65fb-46ae-8d34-338639229ec3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-28T18:16:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 28 18:17:52 compute-0 nova_compute[189296]: 2025-11-28 18:17:52.998 189300 DEBUG nova.virt.hardware [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 28 18:17:52 compute-0 nova_compute[189296]: 2025-11-28 18:17:52.998 189300 DEBUG nova.virt.hardware [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 28 18:17:52 compute-0 nova_compute[189296]: 2025-11-28 18:17:52.998 189300 DEBUG nova.virt.hardware [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 28 18:17:52 compute-0 nova_compute[189296]: 2025-11-28 18:17:52.998 189300 DEBUG nova.virt.hardware [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 28 18:17:52 compute-0 nova_compute[189296]: 2025-11-28 18:17:52.998 189300 DEBUG nova.virt.hardware [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 28 18:17:52 compute-0 nova_compute[189296]: 2025-11-28 18:17:52.998 189300 DEBUG nova.virt.hardware [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 28 18:17:52 compute-0 nova_compute[189296]: 2025-11-28 18:17:52.999 189300 DEBUG nova.virt.hardware [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 28 18:17:52 compute-0 nova_compute[189296]: 2025-11-28 18:17:52.999 189300 DEBUG nova.virt.hardware [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:52.999 189300 DEBUG nova.virt.hardware [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:52.999 189300 DEBUG nova.virt.hardware [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.003 189300 DEBUG nova.virt.libvirt.vif [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T18:17:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-841468157',display_name='tempest-ServersTestManualDisk-server-841468157',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-841468157',id=7,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF0lU3reW+r6+CL4oiKiTJeTxvoYGtNnyZC7K2JFkFHBUYEDbAZx3apgSql2jHITUVC9Q5dSP2o1/FA3PKXjtRYzKuW2OQzECF5F4nGtMC9kKi5U05uhynuj7W2UehWBBw==',key_name='tempest-keypair-617998503',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fb27a9d222b44ca3a79da5ce054611e5',ramdisk_id='',reservation_id='r-65xdvbo8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1661420842',owner_user_name='tempest-ServersTestManualDisk-1661420842-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:17:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='28259861c020436091f3ab3eb680fa5d',uuid=9d9438df-a3bc-4004-95a3-0d76f449fe7e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98", "address": "fa:16:3e:84:73:08", "network": {"id": "e87bc234-f5cf-4903-8735-1e50c5da2392", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-967785827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb27a9d222b44ca3a79da5ce054611e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c9a98c5-1b", "ovs_interfaceid": "0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.003 189300 DEBUG nova.network.os_vif_util [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Converting VIF {"id": "0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98", "address": "fa:16:3e:84:73:08", "network": {"id": "e87bc234-f5cf-4903-8735-1e50c5da2392", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-967785827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb27a9d222b44ca3a79da5ce054611e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c9a98c5-1b", "ovs_interfaceid": "0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.003 189300 DEBUG nova.network.os_vif_util [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:73:08,bridge_name='br-int',has_traffic_filtering=True,id=0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98,network=Network(e87bc234-f5cf-4903-8735-1e50c5da2392),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c9a98c5-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.004 189300 DEBUG nova.objects.instance [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9d9438df-a3bc-4004-95a3-0d76f449fe7e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.075 189300 DEBUG nova.virt.libvirt.driver [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] End _get_guest_xml xml=<domain type="kvm">
Nov 28 18:17:53 compute-0 nova_compute[189296]:  <uuid>9d9438df-a3bc-4004-95a3-0d76f449fe7e</uuid>
Nov 28 18:17:53 compute-0 nova_compute[189296]:  <name>instance-00000007</name>
Nov 28 18:17:53 compute-0 nova_compute[189296]:  <memory>131072</memory>
Nov 28 18:17:53 compute-0 nova_compute[189296]:  <vcpu>1</vcpu>
Nov 28 18:17:53 compute-0 nova_compute[189296]:  <metadata>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <nova:name>tempest-ServersTestManualDisk-server-841468157</nova:name>
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <nova:creationTime>2025-11-28 18:17:52</nova:creationTime>
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <nova:flavor name="m1.nano">
Nov 28 18:17:53 compute-0 nova_compute[189296]:        <nova:memory>128</nova:memory>
Nov 28 18:17:53 compute-0 nova_compute[189296]:        <nova:disk>1</nova:disk>
Nov 28 18:17:53 compute-0 nova_compute[189296]:        <nova:swap>0</nova:swap>
Nov 28 18:17:53 compute-0 nova_compute[189296]:        <nova:ephemeral>0</nova:ephemeral>
Nov 28 18:17:53 compute-0 nova_compute[189296]:        <nova:vcpus>1</nova:vcpus>
Nov 28 18:17:53 compute-0 nova_compute[189296]:      </nova:flavor>
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <nova:owner>
Nov 28 18:17:53 compute-0 nova_compute[189296]:        <nova:user uuid="28259861c020436091f3ab3eb680fa5d">tempest-ServersTestManualDisk-1661420842-project-member</nova:user>
Nov 28 18:17:53 compute-0 nova_compute[189296]:        <nova:project uuid="fb27a9d222b44ca3a79da5ce054611e5">tempest-ServersTestManualDisk-1661420842</nova:project>
Nov 28 18:17:53 compute-0 nova_compute[189296]:      </nova:owner>
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <nova:root type="image" uuid="ffec9e61-65fb-46ae-8d34-338639229ec3"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <nova:ports>
Nov 28 18:17:53 compute-0 nova_compute[189296]:        <nova:port uuid="0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98">
Nov 28 18:17:53 compute-0 nova_compute[189296]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:        </nova:port>
Nov 28 18:17:53 compute-0 nova_compute[189296]:      </nova:ports>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    </nova:instance>
Nov 28 18:17:53 compute-0 nova_compute[189296]:  </metadata>
Nov 28 18:17:53 compute-0 nova_compute[189296]:  <sysinfo type="smbios">
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <system>
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <entry name="manufacturer">RDO</entry>
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <entry name="product">OpenStack Compute</entry>
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <entry name="serial">9d9438df-a3bc-4004-95a3-0d76f449fe7e</entry>
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <entry name="uuid">9d9438df-a3bc-4004-95a3-0d76f449fe7e</entry>
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <entry name="family">Virtual Machine</entry>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    </system>
Nov 28 18:17:53 compute-0 nova_compute[189296]:  </sysinfo>
Nov 28 18:17:53 compute-0 nova_compute[189296]:  <os>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <boot dev="hd"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <smbios mode="sysinfo"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:  </os>
Nov 28 18:17:53 compute-0 nova_compute[189296]:  <features>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <acpi/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <apic/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <vmcoreinfo/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:  </features>
Nov 28 18:17:53 compute-0 nova_compute[189296]:  <clock offset="utc">
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <timer name="pit" tickpolicy="delay"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <timer name="hpet" present="no"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:  </clock>
Nov 28 18:17:53 compute-0 nova_compute[189296]:  <cpu mode="host-model" match="exact">
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <topology sockets="1" cores="1" threads="1"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:  </cpu>
Nov 28 18:17:53 compute-0 nova_compute[189296]:  <devices>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <disk type="file" device="disk">
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/9d9438df-a3bc-4004-95a3-0d76f449fe7e/disk"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <target dev="vda" bus="virtio"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <disk type="file" device="cdrom">
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <driver name="qemu" type="raw" cache="none"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/9d9438df-a3bc-4004-95a3-0d76f449fe7e/disk.config"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <target dev="sda" bus="sata"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <interface type="ethernet">
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <mac address="fa:16:3e:84:73:08"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <driver name="vhost" rx_queue_size="512"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <mtu size="1442"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <target dev="tap0c9a98c5-1b"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    </interface>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <serial type="pty">
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <log file="/var/lib/nova/instances/9d9438df-a3bc-4004-95a3-0d76f449fe7e/console.log" append="off"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    </serial>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <video>
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    </video>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <input type="tablet" bus="usb"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <rng model="virtio">
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <backend model="random">/dev/urandom</backend>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    </rng>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <controller type="usb" index="0"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    <memballoon model="virtio">
Nov 28 18:17:53 compute-0 nova_compute[189296]:      <stats period="10"/>
Nov 28 18:17:53 compute-0 nova_compute[189296]:    </memballoon>
Nov 28 18:17:53 compute-0 nova_compute[189296]:  </devices>
Nov 28 18:17:53 compute-0 nova_compute[189296]: </domain>
Nov 28 18:17:53 compute-0 nova_compute[189296]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.077 189300 DEBUG nova.compute.manager [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Preparing to wait for external event network-vif-plugged-0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.077 189300 DEBUG oslo_concurrency.lockutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Acquiring lock "9d9438df-a3bc-4004-95a3-0d76f449fe7e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.077 189300 DEBUG oslo_concurrency.lockutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Lock "9d9438df-a3bc-4004-95a3-0d76f449fe7e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.078 189300 DEBUG oslo_concurrency.lockutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Lock "9d9438df-a3bc-4004-95a3-0d76f449fe7e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.078 189300 DEBUG nova.virt.libvirt.vif [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T18:17:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-841468157',display_name='tempest-ServersTestManualDisk-server-841468157',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-841468157',id=7,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF0lU3reW+r6+CL4oiKiTJeTxvoYGtNnyZC7K2JFkFHBUYEDbAZx3apgSql2jHITUVC9Q5dSP2o1/FA3PKXjtRYzKuW2OQzECF5F4nGtMC9kKi5U05uhynuj7W2UehWBBw==',key_name='tempest-keypair-617998503',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fb27a9d222b44ca3a79da5ce054611e5',ramdisk_id='',reservation_id='r-65xdvbo8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1661420842',owner_user_name='tempest-ServersTestManualDisk-1661420842-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:17:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='28259861c020436091f3ab3eb680fa5d',uuid=9d9438df-a3bc-4004-95a3-0d76f449fe7e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98", "address": "fa:16:3e:84:73:08", "network": {"id": "e87bc234-f5cf-4903-8735-1e50c5da2392", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-967785827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb27a9d222b44ca3a79da5ce054611e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c9a98c5-1b", "ovs_interfaceid": "0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.079 189300 DEBUG nova.network.os_vif_util [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Converting VIF {"id": "0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98", "address": "fa:16:3e:84:73:08", "network": {"id": "e87bc234-f5cf-4903-8735-1e50c5da2392", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-967785827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb27a9d222b44ca3a79da5ce054611e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c9a98c5-1b", "ovs_interfaceid": "0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.079 189300 DEBUG nova.network.os_vif_util [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:73:08,bridge_name='br-int',has_traffic_filtering=True,id=0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98,network=Network(e87bc234-f5cf-4903-8735-1e50c5da2392),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c9a98c5-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.080 189300 DEBUG os_vif [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:73:08,bridge_name='br-int',has_traffic_filtering=True,id=0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98,network=Network(e87bc234-f5cf-4903-8735-1e50c5da2392),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c9a98c5-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.080 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.081 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.081 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.085 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.085 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0c9a98c5-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.086 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0c9a98c5-1b, col_values=(('external_ids', {'iface-id': '0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:73:08', 'vm-uuid': '9d9438df-a3bc-4004-95a3-0d76f449fe7e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.088 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:53 compute-0 NetworkManager[56307]: <info>  [1764353873.0887] manager: (tap0c9a98c5-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.090 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.095 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.096 189300 INFO os_vif [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:73:08,bridge_name='br-int',has_traffic_filtering=True,id=0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98,network=Network(e87bc234-f5cf-4903-8735-1e50c5da2392),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c9a98c5-1b')#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.169 189300 DEBUG nova.virt.libvirt.driver [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.170 189300 DEBUG nova.virt.libvirt.driver [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.170 189300 DEBUG nova.virt.libvirt.driver [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] No VIF found with MAC fa:16:3e:84:73:08, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.170 189300 INFO nova.virt.libvirt.driver [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Using config drive#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.574 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.810 189300 DEBUG oslo_concurrency.lockutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Acquiring lock "c0b50299-41b1-48cf-b075-08ca569a1bd5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:17:53 compute-0 nova_compute[189296]: 2025-11-28 18:17:53.810 189300 DEBUG oslo_concurrency.lockutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Lock "c0b50299-41b1-48cf-b075-08ca569a1bd5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:17:54 compute-0 nova_compute[189296]: 2025-11-28 18:17:54.356 189300 DEBUG nova.compute.manager [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 28 18:17:54 compute-0 nova_compute[189296]: 2025-11-28 18:17:54.370 189300 DEBUG oslo_concurrency.lockutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Acquiring lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:17:54 compute-0 nova_compute[189296]: 2025-11-28 18:17:54.370 189300 DEBUG oslo_concurrency.lockutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:17:54 compute-0 nova_compute[189296]: 2025-11-28 18:17:54.406 189300 DEBUG nova.compute.manager [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 28 18:17:54 compute-0 nova_compute[189296]: 2025-11-28 18:17:54.457 189300 DEBUG oslo_concurrency.lockutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:17:54 compute-0 nova_compute[189296]: 2025-11-28 18:17:54.457 189300 DEBUG oslo_concurrency.lockutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:17:54 compute-0 nova_compute[189296]: 2025-11-28 18:17:54.468 189300 DEBUG nova.virt.hardware [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 28 18:17:54 compute-0 nova_compute[189296]: 2025-11-28 18:17:54.469 189300 INFO nova.compute.claims [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 28 18:17:54 compute-0 nova_compute[189296]: 2025-11-28 18:17:54.529 189300 DEBUG oslo_concurrency.lockutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:17:54 compute-0 nova_compute[189296]: 2025-11-28 18:17:54.900 189300 DEBUG nova.compute.provider_tree [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:17:54 compute-0 nova_compute[189296]: 2025-11-28 18:17:54.929 189300 DEBUG nova.scheduler.client.report [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:17:54 compute-0 nova_compute[189296]: 2025-11-28 18:17:54.983 189300 INFO nova.virt.libvirt.driver [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Creating config drive at /var/lib/nova/instances/9d9438df-a3bc-4004-95a3-0d76f449fe7e/disk.config#033[00m
Nov 28 18:17:54 compute-0 nova_compute[189296]: 2025-11-28 18:17:54.989 189300 DEBUG oslo_concurrency.processutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9d9438df-a3bc-4004-95a3-0d76f449fe7e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpismgxk3_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.111 189300 DEBUG oslo_concurrency.processutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9d9438df-a3bc-4004-95a3-0d76f449fe7e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpismgxk3_" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:17:55 compute-0 kernel: tap0c9a98c5-1b: entered promiscuous mode
Nov 28 18:17:55 compute-0 NetworkManager[56307]: <info>  [1764353875.1847] manager: (tap0c9a98c5-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Nov 28 18:17:55 compute-0 ovn_controller[97771]: 2025-11-28T18:17:55Z|00073|binding|INFO|Claiming lport 0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 for this chassis.
Nov 28 18:17:55 compute-0 ovn_controller[97771]: 2025-11-28T18:17:55Z|00074|binding|INFO|0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98: Claiming fa:16:3e:84:73:08 10.100.0.9
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.186 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.199 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:55 compute-0 ovn_controller[97771]: 2025-11-28T18:17:55Z|00075|binding|INFO|Setting lport 0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 ovn-installed in OVS
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.204 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:55 compute-0 systemd-machined[155703]: New machine qemu-7-instance-00000007.
Nov 28 18:17:55 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Nov 28 18:17:55 compute-0 systemd-udevd[248157]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 18:17:55 compute-0 NetworkManager[56307]: <info>  [1764353875.2677] device (tap0c9a98c5-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 28 18:17:55 compute-0 NetworkManager[56307]: <info>  [1764353875.2684] device (tap0c9a98c5-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.337 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:73:08 10.100.0.9'], port_security=['fa:16:3e:84:73:08 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9d9438df-a3bc-4004-95a3-0d76f449fe7e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e87bc234-f5cf-4903-8735-1e50c5da2392', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fb27a9d222b44ca3a79da5ce054611e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4ed42db5-cc07-4ced-9aa8-8eb1c68cde2b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cc56f1c9-cc2d-473f-b3d6-7ae98cc4845e, chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:17:55 compute-0 ovn_controller[97771]: 2025-11-28T18:17:55Z|00076|binding|INFO|Setting lport 0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 up in Southbound
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.339 106624 INFO neutron.agent.ovn.metadata.agent [-] Port 0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 in datapath e87bc234-f5cf-4903-8735-1e50c5da2392 bound to our chassis#033[00m
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.341 106624 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e87bc234-f5cf-4903-8735-1e50c5da2392#033[00m
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.349 189300 DEBUG nova.compute.manager [req-6f2b6c8d-c553-4858-af83-ba07c3c6cdb6 req-de04151b-ca3e-4674-a5e1-9d22116f142b 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Received event network-changed-0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.350 189300 DEBUG nova.compute.manager [req-6f2b6c8d-c553-4858-af83-ba07c3c6cdb6 req-de04151b-ca3e-4674-a5e1-9d22116f142b 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Refreshing instance network info cache due to event network-changed-0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.350 189300 DEBUG oslo_concurrency.lockutils [req-6f2b6c8d-c553-4858-af83-ba07c3c6cdb6 req-de04151b-ca3e-4674-a5e1-9d22116f142b 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-9d9438df-a3bc-4004-95a3-0d76f449fe7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.350 189300 DEBUG oslo_concurrency.lockutils [req-6f2b6c8d-c553-4858-af83-ba07c3c6cdb6 req-de04151b-ca3e-4674-a5e1-9d22116f142b 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-9d9438df-a3bc-4004-95a3-0d76f449fe7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.351 189300 DEBUG nova.network.neutron [req-6f2b6c8d-c553-4858-af83-ba07c3c6cdb6 req-de04151b-ca3e-4674-a5e1-9d22116f142b 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Refreshing network info cache for port 0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.359 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[64fc57cc-e380-4740-9420-52c0c90d11b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.360 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape87bc234-f1 in ovnmeta-e87bc234-f5cf-4903-8735-1e50c5da2392 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.362 238909 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape87bc234-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.363 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[7273a738-858a-444f-8498-2693d02b723d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.364 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[de462e8f-74ae-417f-8346-c3d8e547d17c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.380 189300 DEBUG oslo_concurrency.lockutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.922s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.381 189300 DEBUG nova.compute.manager [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.382 106734 DEBUG oslo.privsep.daemon [-] privsep: reply[639a8d90-82bd-48b3-bc27-5257a788cd4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.386 189300 DEBUG oslo_concurrency.lockutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.405 189300 DEBUG nova.virt.hardware [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.405 189300 INFO nova.compute.claims [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.412 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc4a823-cbc5-44c1-85bd-c90df4d89e4c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.442 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[58eb2257-4557-489a-a82c-50202d8fc03d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:17:55 compute-0 NetworkManager[56307]: <info>  [1764353875.4565] manager: (tape87bc234-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/37)
Nov 28 18:17:55 compute-0 systemd-udevd[248159]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.455 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[66a9daa6-525c-40ed-8e63-9c18a956f61d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.480 189300 DEBUG nova.compute.manager [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.481 189300 DEBUG nova.network.neutron [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.502 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe6fa54-8c5d-441b-ac8c-6b490228feca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.505 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[93dc5094-52c2-40cf-ab36-f3737496ec33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:17:55 compute-0 NetworkManager[56307]: <info>  [1764353875.5322] device (tape87bc234-f0): carrier: link connected
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.540 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[d956f5ae-448d-4980-a514-5ea6568a7b7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.560 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[3562d59f-ee8d-45d1-9c62-62985a00c7d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape87bc234-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:4f:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502112, 'reachable_time': 32863, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248197, 'error': None, 'target': 'ovnmeta-e87bc234-f5cf-4903-8735-1e50c5da2392', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.574 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[d8c28b5c-fafb-4b9d-a602-e343071dd321]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feff:4f97'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 502112, 'tstamp': 502112}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248198, 'error': None, 'target': 'ovnmeta-e87bc234-f5cf-4903-8735-1e50c5da2392', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.584 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764353875.5833359, 9d9438df-a3bc-4004-95a3-0d76f449fe7e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.584 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] VM Started (Lifecycle Event)#033[00m
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.594 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[149b385e-cb6f-42dd-b0f4-4db311423322]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape87bc234-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:4f:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502112, 'reachable_time': 32863, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248199, 'error': None, 'target': 'ovnmeta-e87bc234-f5cf-4903-8735-1e50c5da2392', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.624 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[7be386e9-d169-4c02-8e55-1630ee10bc76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.671 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[8d951475-92de-4a38-beb7-27506cc009e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.672 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape87bc234-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.673 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.673 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape87bc234-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:17:55 compute-0 NetworkManager[56307]: <info>  [1764353875.6758] manager: (tape87bc234-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Nov 28 18:17:55 compute-0 kernel: tape87bc234-f0: entered promiscuous mode
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.675 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.678 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape87bc234-f0, col_values=(('external_ids', {'iface-id': '887c8718-c327-47ee-a268-31ddec78a450'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.679 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:55 compute-0 ovn_controller[97771]: 2025-11-28T18:17:55Z|00077|binding|INFO|Releasing lport 887c8718-c327-47ee-a268-31ddec78a450 from this chassis (sb_readonly=0)
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.681 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.682 106624 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e87bc234-f5cf-4903-8735-1e50c5da2392.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e87bc234-f5cf-4903-8735-1e50c5da2392.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.683 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[917e130b-e6cf-4b7d-8cf8-50f20347bf9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.684 106624 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: global
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]:    log         /dev/log local0 debug
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]:    log-tag     haproxy-metadata-proxy-e87bc234-f5cf-4903-8735-1e50c5da2392
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]:    user        root
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]:    group       root
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]:    maxconn     1024
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]:    pidfile     /var/lib/neutron/external/pids/e87bc234-f5cf-4903-8735-1e50c5da2392.pid.haproxy
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]:    daemon
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: defaults
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]:    log global
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]:    mode http
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]:    option httplog
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]:    option dontlognull
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]:    option http-server-close
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]:    option forwardfor
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]:    retries                 3
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]:    timeout http-request    30s
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]:    timeout connect         30s
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]:    timeout client          32s
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]:    timeout server          32s
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]:    timeout http-keep-alive 30s
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: listen listener
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]:    bind 169.254.169.254:80
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]:    server metadata /var/lib/neutron/metadata_proxy
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]:    http-request add-header X-OVN-Network-ID e87bc234-f5cf-4903-8735-1e50c5da2392
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 28 18:17:55 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:17:55.685 106624 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e87bc234-f5cf-4903-8735-1e50c5da2392', 'env', 'PROCESS_TAG=haproxy-e87bc234-f5cf-4903-8735-1e50c5da2392', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e87bc234-f5cf-4903-8735-1e50c5da2392.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.694 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.809 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.814 189300 INFO nova.virt.libvirt.driver [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.821 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764353875.5836773, 9d9438df-a3bc-4004-95a3-0d76f449fe7e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.822 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] VM Paused (Lifecycle Event)#033[00m
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.849 189300 DEBUG nova.compute.manager [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.855 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.861 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.888 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.939 189300 DEBUG nova.compute.provider_tree [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.958 189300 DEBUG nova.scheduler.client.report [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.984 189300 DEBUG nova.compute.manager [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.985 189300 DEBUG nova.virt.libvirt.driver [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.986 189300 INFO nova.virt.libvirt.driver [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Creating image(s)
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.986 189300 DEBUG oslo_concurrency.lockutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Acquiring lock "/var/lib/nova/instances/c0b50299-41b1-48cf-b075-08ca569a1bd5/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.987 189300 DEBUG oslo_concurrency.lockutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Lock "/var/lib/nova/instances/c0b50299-41b1-48cf-b075-08ca569a1bd5/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:17:55 compute-0 nova_compute[189296]: 2025-11-28 18:17:55.988 189300 DEBUG oslo_concurrency.lockutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Lock "/var/lib/nova/instances/c0b50299-41b1-48cf-b075-08ca569a1bd5/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.000 189300 DEBUG oslo_concurrency.lockutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.001 189300 DEBUG nova.compute.manager [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.004 189300 DEBUG oslo_concurrency.processutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.051 189300 DEBUG nova.compute.manager [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.052 189300 DEBUG nova.network.neutron [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.072 189300 DEBUG nova.policy [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bbe93898827d4d57a49114a72388c0ab', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4e27f3ae6d694d7ca975b778b997e12f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.075 189300 INFO nova.virt.libvirt.driver [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.089 189300 DEBUG oslo_concurrency.processutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.090 189300 DEBUG oslo_concurrency.lockutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Acquiring lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.091 189300 DEBUG oslo_concurrency.lockutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.102 189300 DEBUG oslo_concurrency.processutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.117 189300 DEBUG nova.compute.manager [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 28 18:17:56 compute-0 podman[248230]: 2025-11-28 18:17:56.032628075 +0000 UTC m=+0.028279494 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.160 189300 DEBUG oslo_concurrency.processutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.161 189300 DEBUG oslo_concurrency.processutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c,backing_fmt=raw /var/lib/nova/instances/c0b50299-41b1-48cf-b075-08ca569a1bd5/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.225 189300 DEBUG nova.compute.manager [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.229 189300 DEBUG nova.virt.libvirt.driver [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.229 189300 INFO nova.virt.libvirt.driver [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Creating image(s)
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.230 189300 DEBUG oslo_concurrency.lockutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Acquiring lock "/var/lib/nova/instances/1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.231 189300 DEBUG oslo_concurrency.lockutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Lock "/var/lib/nova/instances/1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.231 189300 DEBUG oslo_concurrency.lockutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Lock "/var/lib/nova/instances/1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.243 189300 DEBUG oslo_concurrency.processutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.297 189300 DEBUG oslo_concurrency.processutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.298 189300 DEBUG oslo_concurrency.lockutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Acquiring lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.353 189300 DEBUG nova.policy [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f140e7d00b1542d087d5f92a53ef5082', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '05214746198d48dea7b8b3617f29cb40', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.949 189300 DEBUG oslo_concurrency.processutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c,backing_fmt=raw /var/lib/nova/instances/c0b50299-41b1-48cf-b075-08ca569a1bd5/disk 1073741824" returned: 0 in 0.788s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.951 189300 DEBUG oslo_concurrency.lockutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.951 189300 DEBUG oslo_concurrency.processutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:17:56 compute-0 podman[248230]: 2025-11-28 18:17:56.955672775 +0000 UTC m=+0.951324214 container create 658192dac53e302db54c6e470810ed9404340f10f0934250bab375c06d1471e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e87bc234-f5cf-4903-8735-1e50c5da2392, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.982 189300 DEBUG oslo_concurrency.lockutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:17:56 compute-0 nova_compute[189296]: 2025-11-28 18:17:56.998 189300 DEBUG oslo_concurrency.processutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:17:57 compute-0 systemd[1]: Started libpod-conmon-658192dac53e302db54c6e470810ed9404340f10f0934250bab375c06d1471e7.scope.
Nov 28 18:17:57 compute-0 systemd[1]: Started libcrun container.
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.053 189300 DEBUG oslo_concurrency.processutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.055 189300 DEBUG nova.virt.disk.api [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Checking if we can resize image /var/lib/nova/instances/c0b50299-41b1-48cf-b075-08ca569a1bd5/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.055 189300 DEBUG oslo_concurrency.processutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c0b50299-41b1-48cf-b075-08ca569a1bd5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:17:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/469ebe7bc3eaf000933fbb07d2f451743a6185fd324064f80d4775274da65bdc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.070 189300 DEBUG oslo_concurrency.processutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.071 189300 DEBUG oslo_concurrency.processutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c,backing_fmt=raw /var/lib/nova/instances/1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:17:57 compute-0 podman[248230]: 2025-11-28 18:17:57.076315702 +0000 UTC m=+1.071967121 container init 658192dac53e302db54c6e470810ed9404340f10f0934250bab375c06d1471e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e87bc234-f5cf-4903-8735-1e50c5da2392, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 28 18:17:57 compute-0 podman[248230]: 2025-11-28 18:17:57.084097733 +0000 UTC m=+1.079749132 container start 658192dac53e302db54c6e470810ed9404340f10f0934250bab375c06d1471e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e87bc234-f5cf-4903-8735-1e50c5da2392, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 28 18:17:57 compute-0 neutron-haproxy-ovnmeta-e87bc234-f5cf-4903-8735-1e50c5da2392[248258]: [NOTICE]   (248268) : New worker (248273) forked
Nov 28 18:17:57 compute-0 neutron-haproxy-ovnmeta-e87bc234-f5cf-4903-8735-1e50c5da2392[248258]: [NOTICE]   (248268) : Loading success.
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.112 189300 DEBUG oslo_concurrency.processutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c,backing_fmt=raw /var/lib/nova/instances/1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk 1073741824" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.114 189300 DEBUG oslo_concurrency.lockutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.114 189300 DEBUG oslo_concurrency.processutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.131 189300 DEBUG oslo_concurrency.processutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c0b50299-41b1-48cf-b075-08ca569a1bd5/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.134 189300 DEBUG nova.virt.disk.api [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Cannot resize image /var/lib/nova/instances/c0b50299-41b1-48cf-b075-08ca569a1bd5/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.134 189300 DEBUG nova.objects.instance [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Lazy-loading 'migration_context' on Instance uuid c0b50299-41b1-48cf-b075-08ca569a1bd5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.180 189300 DEBUG nova.virt.libvirt.driver [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.181 189300 DEBUG nova.virt.libvirt.driver [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Ensure instance console log exists: /var/lib/nova/instances/c0b50299-41b1-48cf-b075-08ca569a1bd5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.181 189300 DEBUG oslo_concurrency.lockutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.182 189300 DEBUG oslo_concurrency.lockutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.182 189300 DEBUG oslo_concurrency.lockutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.185 189300 DEBUG oslo_concurrency.processutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.185 189300 DEBUG nova.virt.disk.api [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Checking if we can resize image /var/lib/nova/instances/1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.186 189300 DEBUG oslo_concurrency.processutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.242 189300 DEBUG oslo_concurrency.processutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.250 189300 DEBUG nova.virt.disk.api [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Cannot resize image /var/lib/nova/instances/1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.252 189300 DEBUG nova.objects.instance [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Lazy-loading 'migration_context' on Instance uuid 1b9021c0-08c4-448d-9f6c-a589a543fb4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.271 189300 DEBUG nova.virt.libvirt.driver [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.272 189300 DEBUG nova.virt.libvirt.driver [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Ensure instance console log exists: /var/lib/nova/instances/1b9021c0-08c4-448d-9f6c-a589a543fb4c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.272 189300 DEBUG oslo_concurrency.lockutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.273 189300 DEBUG oslo_concurrency.lockutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.273 189300 DEBUG oslo_concurrency.lockutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:17:57 compute-0 nova_compute[189296]: 2025-11-28 18:17:57.526 189300 DEBUG nova.network.neutron [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Successfully created port: c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 28 18:17:57 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 28 18:17:57 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 28 18:17:58 compute-0 nova_compute[189296]: 2025-11-28 18:17:58.091 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:58 compute-0 nova_compute[189296]: 2025-11-28 18:17:58.577 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:59 compute-0 nova_compute[189296]: 2025-11-28 18:17:59.125 189300 DEBUG nova.network.neutron [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Successfully updated port: c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 28 18:17:59 compute-0 nova_compute[189296]: 2025-11-28 18:17:59.144 189300 DEBUG oslo_concurrency.lockutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Acquiring lock "refresh_cache-1b9021c0-08c4-448d-9f6c-a589a543fb4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:17:59 compute-0 nova_compute[189296]: 2025-11-28 18:17:59.145 189300 DEBUG oslo_concurrency.lockutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Acquired lock "refresh_cache-1b9021c0-08c4-448d-9f6c-a589a543fb4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:17:59 compute-0 nova_compute[189296]: 2025-11-28 18:17:59.145 189300 DEBUG nova.network.neutron [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 28 18:17:59 compute-0 nova_compute[189296]: 2025-11-28 18:17:59.355 189300 DEBUG nova.network.neutron [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 28 18:17:59 compute-0 nova_compute[189296]: 2025-11-28 18:17:59.590 189300 DEBUG nova.compute.manager [req-01a9a55b-49a9-4a52-a347-94068169ebfa req-2e876a19-7cc8-4447-8e6a-55a58a8ee852 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Received event network-changed-c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:17:59 compute-0 nova_compute[189296]: 2025-11-28 18:17:59.591 189300 DEBUG nova.compute.manager [req-01a9a55b-49a9-4a52-a347-94068169ebfa req-2e876a19-7cc8-4447-8e6a-55a58a8ee852 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Refreshing instance network info cache due to event network-changed-c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 28 18:17:59 compute-0 nova_compute[189296]: 2025-11-28 18:17:59.592 189300 DEBUG oslo_concurrency.lockutils [req-01a9a55b-49a9-4a52-a347-94068169ebfa req-2e876a19-7cc8-4447-8e6a-55a58a8ee852 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-1b9021c0-08c4-448d-9f6c-a589a543fb4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:17:59 compute-0 ovn_controller[97771]: 2025-11-28T18:17:59Z|00078|binding|INFO|Releasing lport 887c8718-c327-47ee-a268-31ddec78a450 from this chassis (sb_readonly=0)
Nov 28 18:17:59 compute-0 nova_compute[189296]: 2025-11-28 18:17:59.623 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:59 compute-0 nova_compute[189296]: 2025-11-28 18:17:59.716 189300 DEBUG nova.network.neutron [req-6f2b6c8d-c553-4858-af83-ba07c3c6cdb6 req-de04151b-ca3e-4674-a5e1-9d22116f142b 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Updated VIF entry in instance network info cache for port 0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 28 18:17:59 compute-0 nova_compute[189296]: 2025-11-28 18:17:59.717 189300 DEBUG nova.network.neutron [req-6f2b6c8d-c553-4858-af83-ba07c3c6cdb6 req-de04151b-ca3e-4674-a5e1-9d22116f142b 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Updating instance_info_cache with network_info: [{"id": "0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98", "address": "fa:16:3e:84:73:08", "network": {"id": "e87bc234-f5cf-4903-8735-1e50c5da2392", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-967785827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb27a9d222b44ca3a79da5ce054611e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c9a98c5-1b", "ovs_interfaceid": "0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:17:59 compute-0 podman[203494]: time="2025-11-28T18:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:17:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:17:59 compute-0 nova_compute[189296]: 2025-11-28 18:17:59.755 189300 DEBUG oslo_concurrency.lockutils [req-6f2b6c8d-c553-4858-af83-ba07c3c6cdb6 req-de04151b-ca3e-4674-a5e1-9d22116f142b 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-9d9438df-a3bc-4004-95a3-0d76f449fe7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:17:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4781 "" "Go-http-client/1.1"
Nov 28 18:17:59 compute-0 nova_compute[189296]: 2025-11-28 18:17:59.798 189300 DEBUG nova.network.neutron [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Successfully created port: 6c1cb38b-9fde-458f-a36b-d1c95b04690c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 28 18:17:59 compute-0 ovn_controller[97771]: 2025-11-28T18:17:59Z|00079|binding|INFO|Releasing lport 887c8718-c327-47ee-a268-31ddec78a450 from this chassis (sb_readonly=0)
Nov 28 18:17:59 compute-0 nova_compute[189296]: 2025-11-28 18:17:59.855 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:17:59 compute-0 podman[248310]: 2025-11-28 18:17:59.998267167 +0000 UTC m=+0.062071884 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.371 189300 DEBUG nova.network.neutron [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Updating instance_info_cache with network_info: [{"id": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "address": "fa:16:3e:3f:70:8b", "network": {"id": "c1532d46-30e4-42ec-9ba7-4dc79dd935a5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1705465512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05214746198d48dea7b8b3617f29cb40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a2ec90-a4", "ovs_interfaceid": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.395 189300 DEBUG oslo_concurrency.lockutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Releasing lock "refresh_cache-1b9021c0-08c4-448d-9f6c-a589a543fb4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.395 189300 DEBUG nova.compute.manager [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Instance network_info: |[{"id": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "address": "fa:16:3e:3f:70:8b", "network": {"id": "c1532d46-30e4-42ec-9ba7-4dc79dd935a5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1705465512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05214746198d48dea7b8b3617f29cb40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a2ec90-a4", "ovs_interfaceid": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.396 189300 DEBUG oslo_concurrency.lockutils [req-01a9a55b-49a9-4a52-a347-94068169ebfa req-2e876a19-7cc8-4447-8e6a-55a58a8ee852 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-1b9021c0-08c4-448d-9f6c-a589a543fb4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.396 189300 DEBUG nova.network.neutron [req-01a9a55b-49a9-4a52-a347-94068169ebfa req-2e876a19-7cc8-4447-8e6a-55a58a8ee852 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Refreshing network info cache for port c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.398 189300 DEBUG nova.virt.libvirt.driver [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Start _get_guest_xml network_info=[{"id": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "address": "fa:16:3e:3f:70:8b", "network": {"id": "c1532d46-30e4-42ec-9ba7-4dc79dd935a5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1705465512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05214746198d48dea7b8b3617f29cb40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a2ec90-a4", "ovs_interfaceid": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-28T18:16:38Z,direct_url=<?>,disk_format='qcow2',id=ffec9e61-65fb-46ae-8d34-338639229ec3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-28T18:16:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'image_id': 'ffec9e61-65fb-46ae-8d34-338639229ec3'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.405 189300 WARNING nova.virt.libvirt.driver [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.415 189300 DEBUG nova.virt.libvirt.host [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.415 189300 DEBUG nova.virt.libvirt.host [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.420 189300 DEBUG nova.virt.libvirt.host [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.421 189300 DEBUG nova.virt.libvirt.host [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.422 189300 DEBUG nova.virt.libvirt.driver [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.422 189300 DEBUG nova.virt.hardware [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-28T18:16:37Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b177f611-8f79-4bfd-9a12-e83e9545757b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-28T18:16:38Z,direct_url=<?>,disk_format='qcow2',id=ffec9e61-65fb-46ae-8d34-338639229ec3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-28T18:16:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.423 189300 DEBUG nova.virt.hardware [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.424 189300 DEBUG nova.virt.hardware [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.424 189300 DEBUG nova.virt.hardware [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.425 189300 DEBUG nova.virt.hardware [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.425 189300 DEBUG nova.virt.hardware [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.426 189300 DEBUG nova.virt.hardware [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.427 189300 DEBUG nova.virt.hardware [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.427 189300 DEBUG nova.virt.hardware [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.428 189300 DEBUG nova.virt.hardware [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.428 189300 DEBUG nova.virt.hardware [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.433 189300 DEBUG nova.virt.libvirt.vif [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T18:17:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-403870488',display_name='tempest-AttachInterfacesUnderV243Test-server-403870488',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-403870488',id=9,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPncI9of+mH+7uV43WSH0h6v0tb4ecdPAqEEgZeWgO3O4t7/yOoQtm5GFO9PNSzxMORfBEH14/GC/3Lk3DyzrmiLz758VzhRyMdlYe9lNVTfz8ynkWxJ/dx+73eKT+nC6g==',key_name='tempest-keypair-20086383',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='05214746198d48dea7b8b3617f29cb40',ramdisk_id='',reservation_id='r-7m48njdu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-732631617',owner_user_name='tempest-AttachInterfacesUnderV243Test-732631617-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:17:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f140e7d00b1542d087d5f92a53ef5082',uuid=1b9021c0-08c4-448d-9f6c-a589a543fb4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "address": "fa:16:3e:3f:70:8b", "network": {"id": "c1532d46-30e4-42ec-9ba7-4dc79dd935a5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1705465512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05214746198d48dea7b8b3617f29cb40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a2ec90-a4", "ovs_interfaceid": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.434 189300 DEBUG nova.network.os_vif_util [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Converting VIF {"id": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "address": "fa:16:3e:3f:70:8b", "network": {"id": "c1532d46-30e4-42ec-9ba7-4dc79dd935a5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1705465512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05214746198d48dea7b8b3617f29cb40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a2ec90-a4", "ovs_interfaceid": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.435 189300 DEBUG nova.network.os_vif_util [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:70:8b,bridge_name='br-int',has_traffic_filtering=True,id=c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6,network=Network(c1532d46-30e4-42ec-9ba7-4dc79dd935a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1a2ec90-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.437 189300 DEBUG nova.objects.instance [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1b9021c0-08c4-448d-9f6c-a589a543fb4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.466 189300 DEBUG nova.virt.libvirt.driver [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] End _get_guest_xml xml=<domain type="kvm">
Nov 28 18:18:00 compute-0 nova_compute[189296]:  <uuid>1b9021c0-08c4-448d-9f6c-a589a543fb4c</uuid>
Nov 28 18:18:00 compute-0 nova_compute[189296]:  <name>instance-00000009</name>
Nov 28 18:18:00 compute-0 nova_compute[189296]:  <memory>131072</memory>
Nov 28 18:18:00 compute-0 nova_compute[189296]:  <vcpu>1</vcpu>
Nov 28 18:18:00 compute-0 nova_compute[189296]:  <metadata>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <nova:name>tempest-AttachInterfacesUnderV243Test-server-403870488</nova:name>
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <nova:creationTime>2025-11-28 18:18:00</nova:creationTime>
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <nova:flavor name="m1.nano">
Nov 28 18:18:00 compute-0 nova_compute[189296]:        <nova:memory>128</nova:memory>
Nov 28 18:18:00 compute-0 nova_compute[189296]:        <nova:disk>1</nova:disk>
Nov 28 18:18:00 compute-0 nova_compute[189296]:        <nova:swap>0</nova:swap>
Nov 28 18:18:00 compute-0 nova_compute[189296]:        <nova:ephemeral>0</nova:ephemeral>
Nov 28 18:18:00 compute-0 nova_compute[189296]:        <nova:vcpus>1</nova:vcpus>
Nov 28 18:18:00 compute-0 nova_compute[189296]:      </nova:flavor>
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <nova:owner>
Nov 28 18:18:00 compute-0 nova_compute[189296]:        <nova:user uuid="f140e7d00b1542d087d5f92a53ef5082">tempest-AttachInterfacesUnderV243Test-732631617-project-member</nova:user>
Nov 28 18:18:00 compute-0 nova_compute[189296]:        <nova:project uuid="05214746198d48dea7b8b3617f29cb40">tempest-AttachInterfacesUnderV243Test-732631617</nova:project>
Nov 28 18:18:00 compute-0 nova_compute[189296]:      </nova:owner>
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <nova:root type="image" uuid="ffec9e61-65fb-46ae-8d34-338639229ec3"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <nova:ports>
Nov 28 18:18:00 compute-0 nova_compute[189296]:        <nova:port uuid="c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6">
Nov 28 18:18:00 compute-0 nova_compute[189296]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:        </nova:port>
Nov 28 18:18:00 compute-0 nova_compute[189296]:      </nova:ports>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    </nova:instance>
Nov 28 18:18:00 compute-0 nova_compute[189296]:  </metadata>
Nov 28 18:18:00 compute-0 nova_compute[189296]:  <sysinfo type="smbios">
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <system>
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <entry name="manufacturer">RDO</entry>
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <entry name="product">OpenStack Compute</entry>
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <entry name="serial">1b9021c0-08c4-448d-9f6c-a589a543fb4c</entry>
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <entry name="uuid">1b9021c0-08c4-448d-9f6c-a589a543fb4c</entry>
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <entry name="family">Virtual Machine</entry>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    </system>
Nov 28 18:18:00 compute-0 nova_compute[189296]:  </sysinfo>
Nov 28 18:18:00 compute-0 nova_compute[189296]:  <os>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <boot dev="hd"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <smbios mode="sysinfo"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:  </os>
Nov 28 18:18:00 compute-0 nova_compute[189296]:  <features>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <acpi/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <apic/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <vmcoreinfo/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:  </features>
Nov 28 18:18:00 compute-0 nova_compute[189296]:  <clock offset="utc">
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <timer name="pit" tickpolicy="delay"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <timer name="hpet" present="no"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:  </clock>
Nov 28 18:18:00 compute-0 nova_compute[189296]:  <cpu mode="host-model" match="exact">
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <topology sockets="1" cores="1" threads="1"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:  </cpu>
Nov 28 18:18:00 compute-0 nova_compute[189296]:  <devices>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <disk type="file" device="disk">
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <target dev="vda" bus="virtio"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <disk type="file" device="cdrom">
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <driver name="qemu" type="raw" cache="none"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk.config"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <target dev="sda" bus="sata"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <interface type="ethernet">
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <mac address="fa:16:3e:3f:70:8b"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <driver name="vhost" rx_queue_size="512"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <mtu size="1442"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <target dev="tapc1a2ec90-a4"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    </interface>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <serial type="pty">
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <log file="/var/lib/nova/instances/1b9021c0-08c4-448d-9f6c-a589a543fb4c/console.log" append="off"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    </serial>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <video>
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    </video>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <input type="tablet" bus="usb"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <rng model="virtio">
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <backend model="random">/dev/urandom</backend>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    </rng>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <controller type="usb" index="0"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    <memballoon model="virtio">
Nov 28 18:18:00 compute-0 nova_compute[189296]:      <stats period="10"/>
Nov 28 18:18:00 compute-0 nova_compute[189296]:    </memballoon>
Nov 28 18:18:00 compute-0 nova_compute[189296]:  </devices>
Nov 28 18:18:00 compute-0 nova_compute[189296]: </domain>
Nov 28 18:18:00 compute-0 nova_compute[189296]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.467 189300 DEBUG nova.compute.manager [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Preparing to wait for external event network-vif-plugged-c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.468 189300 DEBUG oslo_concurrency.lockutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Acquiring lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.468 189300 DEBUG oslo_concurrency.lockutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.468 189300 DEBUG oslo_concurrency.lockutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.469 189300 DEBUG nova.virt.libvirt.vif [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T18:17:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-403870488',display_name='tempest-AttachInterfacesUnderV243Test-server-403870488',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-403870488',id=9,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPncI9of+mH+7uV43WSH0h6v0tb4ecdPAqEEgZeWgO3O4t7/yOoQtm5GFO9PNSzxMORfBEH14/GC/3Lk3DyzrmiLz758VzhRyMdlYe9lNVTfz8ynkWxJ/dx+73eKT+nC6g==',key_name='tempest-keypair-20086383',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='05214746198d48dea7b8b3617f29cb40',ramdisk_id='',reservation_id='r-7m48njdu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-732631617',owner_user_name='tempest-AttachInterfacesUnderV243Test-732631617-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:17:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f140e7d00b1542d087d5f92a53ef5082',uuid=1b9021c0-08c4-448d-9f6c-a589a543fb4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "address": "fa:16:3e:3f:70:8b", "network": {"id": "c1532d46-30e4-42ec-9ba7-4dc79dd935a5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1705465512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05214746198d48dea7b8b3617f29cb40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a2ec90-a4", "ovs_interfaceid": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.469 189300 DEBUG nova.network.os_vif_util [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Converting VIF {"id": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "address": "fa:16:3e:3f:70:8b", "network": {"id": "c1532d46-30e4-42ec-9ba7-4dc79dd935a5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1705465512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05214746198d48dea7b8b3617f29cb40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a2ec90-a4", "ovs_interfaceid": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.470 189300 DEBUG nova.network.os_vif_util [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:70:8b,bridge_name='br-int',has_traffic_filtering=True,id=c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6,network=Network(c1532d46-30e4-42ec-9ba7-4dc79dd935a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1a2ec90-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.471 189300 DEBUG os_vif [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:70:8b,bridge_name='br-int',has_traffic_filtering=True,id=c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6,network=Network(c1532d46-30e4-42ec-9ba7-4dc79dd935a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1a2ec90-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.471 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.472 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.472 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.476 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.476 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1a2ec90-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.476 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc1a2ec90-a4, col_values=(('external_ids', {'iface-id': 'c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:70:8b', 'vm-uuid': '1b9021c0-08c4-448d-9f6c-a589a543fb4c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:00 compute-0 NetworkManager[56307]: <info>  [1764353880.4796] manager: (tapc1a2ec90-a4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.480 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.481 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.488 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.489 189300 INFO os_vif [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:70:8b,bridge_name='br-int',has_traffic_filtering=True,id=c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6,network=Network(c1532d46-30e4-42ec-9ba7-4dc79dd935a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1a2ec90-a4')#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.556 189300 DEBUG nova.virt.libvirt.driver [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.557 189300 DEBUG nova.virt.libvirt.driver [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.557 189300 DEBUG nova.virt.libvirt.driver [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] No VIF found with MAC fa:16:3e:3f:70:8b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 28 18:18:00 compute-0 nova_compute[189296]: 2025-11-28 18:18:00.558 189300 INFO nova.virt.libvirt.driver [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Using config drive#033[00m
Nov 28 18:18:01 compute-0 openstack_network_exporter[205632]: ERROR   18:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:18:01 compute-0 openstack_network_exporter[205632]: ERROR   18:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:18:01 compute-0 openstack_network_exporter[205632]: ERROR   18:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:18:01 compute-0 openstack_network_exporter[205632]: ERROR   18:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:18:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:18:01 compute-0 openstack_network_exporter[205632]: ERROR   18:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:18:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:18:01 compute-0 nova_compute[189296]: 2025-11-28 18:18:01.646 189300 INFO nova.virt.libvirt.driver [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Creating config drive at /var/lib/nova/instances/1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk.config#033[00m
Nov 28 18:18:01 compute-0 nova_compute[189296]: 2025-11-28 18:18:01.655 189300 DEBUG oslo_concurrency.processutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpehz6yehm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:18:01 compute-0 nova_compute[189296]: 2025-11-28 18:18:01.787 189300 DEBUG oslo_concurrency.processutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpehz6yehm" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:18:01 compute-0 kernel: tapc1a2ec90-a4: entered promiscuous mode
Nov 28 18:18:01 compute-0 NetworkManager[56307]: <info>  [1764353881.8732] manager: (tapc1a2ec90-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Nov 28 18:18:01 compute-0 ovn_controller[97771]: 2025-11-28T18:18:01Z|00080|binding|INFO|Claiming lport c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 for this chassis.
Nov 28 18:18:01 compute-0 ovn_controller[97771]: 2025-11-28T18:18:01Z|00081|binding|INFO|c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6: Claiming fa:16:3e:3f:70:8b 10.100.0.4
Nov 28 18:18:01 compute-0 nova_compute[189296]: 2025-11-28 18:18:01.876 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:01.903 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:70:8b 10.100.0.4'], port_security=['fa:16:3e:3f:70:8b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1b9021c0-08c4-448d-9f6c-a589a543fb4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1532d46-30e4-42ec-9ba7-4dc79dd935a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '05214746198d48dea7b8b3617f29cb40', 'neutron:revision_number': '2', 'neutron:security_group_ids': '16efcad3-8c29-4cf4-abbd-eaf90a8b40f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=028cab25-8237-4062-b9d7-d9732783abc5, chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:18:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:01.906 106624 INFO neutron.agent.ovn.metadata.agent [-] Port c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 in datapath c1532d46-30e4-42ec-9ba7-4dc79dd935a5 bound to our chassis#033[00m
Nov 28 18:18:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:01.908 106624 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c1532d46-30e4-42ec-9ba7-4dc79dd935a5#033[00m
Nov 28 18:18:01 compute-0 systemd-udevd[248351]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 18:18:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:01.922 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e9f30b-d5d9-45aa-a278-0c96a0fc6be6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:01.923 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc1532d46-31 in ovnmeta-c1532d46-30e4-42ec-9ba7-4dc79dd935a5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 28 18:18:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:01.925 238909 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc1532d46-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 28 18:18:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:01.925 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[00edbf78-3b0f-4880-853f-4fe52cf4446e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:01.928 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[e58c2bae-bd59-4fda-8a86-27fd996567e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:01 compute-0 NetworkManager[56307]: <info>  [1764353881.9334] device (tapc1a2ec90-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 28 18:18:01 compute-0 NetworkManager[56307]: <info>  [1764353881.9382] device (tapc1a2ec90-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 28 18:18:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:01.942 106734 DEBUG oslo.privsep.daemon [-] privsep: reply[d431ea9c-c8d3-4764-8ef6-97c1649b4ac2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:01 compute-0 systemd-machined[155703]: New machine qemu-8-instance-00000009.
Nov 28 18:18:01 compute-0 nova_compute[189296]: 2025-11-28 18:18:01.960 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:01.962 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[c6012b58-019b-4774-a706-84777515c69c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:01 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000009.
Nov 28 18:18:01 compute-0 ovn_controller[97771]: 2025-11-28T18:18:01Z|00082|binding|INFO|Setting lport c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 ovn-installed in OVS
Nov 28 18:18:01 compute-0 ovn_controller[97771]: 2025-11-28T18:18:01Z|00083|binding|INFO|Setting lport c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 up in Southbound
Nov 28 18:18:01 compute-0 nova_compute[189296]: 2025-11-28 18:18:01.966 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:01.997 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[83e95b19-f890-407b-bd2e-4607ae50bbc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:02.004 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[1f31d42f-14ea-46a8-baed-4c6a6f23af0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:02 compute-0 NetworkManager[56307]: <info>  [1764353882.0055] manager: (tapc1532d46-30): new Veth device (/org/freedesktop/NetworkManager/Devices/41)
Nov 28 18:18:02 compute-0 systemd-udevd[248357]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:02.052 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[621cf9b2-2f23-431f-891e-eec6d0efd058]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:02.056 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[d20511d9-5f7f-4814-bcde-065a815fb9df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:02 compute-0 NetworkManager[56307]: <info>  [1764353882.0866] device (tapc1532d46-30): carrier: link connected
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:02.093 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[04fecd8a-4d03-4ed7-9a15-b80f9aa5c092]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:02.114 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[6bc7c63d-51dc-4175-84a9-05a851b5c221]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc1532d46-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:80:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502768, 'reachable_time': 21707, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248387, 'error': None, 'target': 'ovnmeta-c1532d46-30e4-42ec-9ba7-4dc79dd935a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:02.132 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[8d4a42a7-4bc2-4864-b83a-d9289b0fd83e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe46:807f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 502768, 'tstamp': 502768}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248388, 'error': None, 'target': 'ovnmeta-c1532d46-30e4-42ec-9ba7-4dc79dd935a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:02.147 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[9490fafc-1c47-4748-89a1-c4a97cbf22c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc1532d46-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:80:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502768, 'reachable_time': 21707, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248389, 'error': None, 'target': 'ovnmeta-c1532d46-30e4-42ec-9ba7-4dc79dd935a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:02.189 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[bde804fc-c8a2-4907-8a88-d43957d7874a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:02.265 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[53ef01b9-538a-4283-807e-d1f6a7e7c7b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:02.267 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1532d46-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:02.268 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:02.269 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1532d46-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:02 compute-0 nova_compute[189296]: 2025-11-28 18:18:02.272 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:02 compute-0 kernel: tapc1532d46-30: entered promiscuous mode
Nov 28 18:18:02 compute-0 NetworkManager[56307]: <info>  [1764353882.2732] manager: (tapc1532d46-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:02.277 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc1532d46-30, col_values=(('external_ids', {'iface-id': 'c8eddf3b-1e0b-416b-ad1a-748f52f665f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:02 compute-0 nova_compute[189296]: 2025-11-28 18:18:02.279 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:02 compute-0 ovn_controller[97771]: 2025-11-28T18:18:02Z|00084|binding|INFO|Releasing lport c8eddf3b-1e0b-416b-ad1a-748f52f665f0 from this chassis (sb_readonly=0)
Nov 28 18:18:02 compute-0 nova_compute[189296]: 2025-11-28 18:18:02.281 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:02.281 106624 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c1532d46-30e4-42ec-9ba7-4dc79dd935a5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c1532d46-30e4-42ec-9ba7-4dc79dd935a5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:02.283 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[f0cc4aa8-5936-4942-91cb-2406623937e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:02.284 106624 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]: global
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]:    log         /dev/log local0 debug
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]:    log-tag     haproxy-metadata-proxy-c1532d46-30e4-42ec-9ba7-4dc79dd935a5
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]:    user        root
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]:    group       root
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]:    maxconn     1024
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]:    pidfile     /var/lib/neutron/external/pids/c1532d46-30e4-42ec-9ba7-4dc79dd935a5.pid.haproxy
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]:    daemon
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]: defaults
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]:    log global
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]:    mode http
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]:    option httplog
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]:    option dontlognull
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]:    option http-server-close
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]:    option forwardfor
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]:    retries                 3
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]:    timeout http-request    30s
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]:    timeout connect         30s
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]:    timeout client          32s
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]:    timeout server          32s
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]:    timeout http-keep-alive 30s
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]: listen listener
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]:    bind 169.254.169.254:80
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]:    server metadata /var/lib/neutron/metadata_proxy
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]:    http-request add-header X-OVN-Network-ID c1532d46-30e4-42ec-9ba7-4dc79dd935a5
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 28 18:18:02 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:02.288 106624 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c1532d46-30e4-42ec-9ba7-4dc79dd935a5', 'env', 'PROCESS_TAG=haproxy-c1532d46-30e4-42ec-9ba7-4dc79dd935a5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c1532d46-30e4-42ec-9ba7-4dc79dd935a5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 28 18:18:02 compute-0 nova_compute[189296]: 2025-11-28 18:18:02.293 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:02 compute-0 nova_compute[189296]: 2025-11-28 18:18:02.674 189300 DEBUG nova.network.neutron [req-01a9a55b-49a9-4a52-a347-94068169ebfa req-2e876a19-7cc8-4447-8e6a-55a58a8ee852 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Updated VIF entry in instance network info cache for port c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 28 18:18:02 compute-0 nova_compute[189296]: 2025-11-28 18:18:02.674 189300 DEBUG nova.network.neutron [req-01a9a55b-49a9-4a52-a347-94068169ebfa req-2e876a19-7cc8-4447-8e6a-55a58a8ee852 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Updating instance_info_cache with network_info: [{"id": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "address": "fa:16:3e:3f:70:8b", "network": {"id": "c1532d46-30e4-42ec-9ba7-4dc79dd935a5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1705465512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05214746198d48dea7b8b3617f29cb40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a2ec90-a4", "ovs_interfaceid": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:18:02 compute-0 nova_compute[189296]: 2025-11-28 18:18:02.690 189300 DEBUG oslo_concurrency.lockutils [req-01a9a55b-49a9-4a52-a347-94068169ebfa req-2e876a19-7cc8-4447-8e6a-55a58a8ee852 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-1b9021c0-08c4-448d-9f6c-a589a543fb4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:18:02 compute-0 podman[248420]: 2025-11-28 18:18:02.720086243 +0000 UTC m=+0.065422595 container create 95fcddfffa8df6b5158e58c3f329c258f1ab0724ad6b5c4b4c2aa729ff72c066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1532d46-30e4-42ec-9ba7-4dc79dd935a5, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:18:02 compute-0 systemd[1]: Started libpod-conmon-95fcddfffa8df6b5158e58c3f329c258f1ab0724ad6b5c4b4c2aa729ff72c066.scope.
Nov 28 18:18:02 compute-0 podman[248420]: 2025-11-28 18:18:02.684676474 +0000 UTC m=+0.030012856 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 28 18:18:02 compute-0 systemd[1]: Started libcrun container.
Nov 28 18:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89de99a7703d0392c1140feaa00e3cd73fc92ce4749cf19375e3a2e5c0d1969/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 28 18:18:02 compute-0 podman[248420]: 2025-11-28 18:18:02.809885854 +0000 UTC m=+0.155222206 container init 95fcddfffa8df6b5158e58c3f329c258f1ab0724ad6b5c4b4c2aa729ff72c066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1532d46-30e4-42ec-9ba7-4dc79dd935a5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 28 18:18:02 compute-0 podman[248420]: 2025-11-28 18:18:02.817790907 +0000 UTC m=+0.163127239 container start 95fcddfffa8df6b5158e58c3f329c258f1ab0724ad6b5c4b4c2aa729ff72c066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1532d46-30e4-42ec-9ba7-4dc79dd935a5, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:18:02 compute-0 neutron-haproxy-ovnmeta-c1532d46-30e4-42ec-9ba7-4dc79dd935a5[248440]: [NOTICE]   (248446) : New worker (248449) forked
Nov 28 18:18:02 compute-0 neutron-haproxy-ovnmeta-c1532d46-30e4-42ec-9ba7-4dc79dd935a5[248440]: [NOTICE]   (248446) : Loading success.
Nov 28 18:18:02 compute-0 nova_compute[189296]: 2025-11-28 18:18:02.878 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764353882.87768, 1b9021c0-08c4-448d-9f6c-a589a543fb4c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:18:02 compute-0 nova_compute[189296]: 2025-11-28 18:18:02.878 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] VM Started (Lifecycle Event)#033[00m
Nov 28 18:18:02 compute-0 nova_compute[189296]: 2025-11-28 18:18:02.900 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:18:02 compute-0 nova_compute[189296]: 2025-11-28 18:18:02.905 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764353882.8777657, 1b9021c0-08c4-448d-9f6c-a589a543fb4c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:18:02 compute-0 nova_compute[189296]: 2025-11-28 18:18:02.906 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] VM Paused (Lifecycle Event)#033[00m
Nov 28 18:18:02 compute-0 nova_compute[189296]: 2025-11-28 18:18:02.925 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:18:02 compute-0 nova_compute[189296]: 2025-11-28 18:18:02.932 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 18:18:02 compute-0 nova_compute[189296]: 2025-11-28 18:18:02.972 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 28 18:18:03 compute-0 nova_compute[189296]: 2025-11-28 18:18:03.289 189300 DEBUG nova.network.neutron [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Successfully updated port: 6c1cb38b-9fde-458f-a36b-d1c95b04690c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 28 18:18:03 compute-0 nova_compute[189296]: 2025-11-28 18:18:03.325 189300 DEBUG oslo_concurrency.lockutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Acquiring lock "refresh_cache-c0b50299-41b1-48cf-b075-08ca569a1bd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:18:03 compute-0 nova_compute[189296]: 2025-11-28 18:18:03.325 189300 DEBUG oslo_concurrency.lockutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Acquired lock "refresh_cache-c0b50299-41b1-48cf-b075-08ca569a1bd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:18:03 compute-0 nova_compute[189296]: 2025-11-28 18:18:03.325 189300 DEBUG nova.network.neutron [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 28 18:18:03 compute-0 nova_compute[189296]: 2025-11-28 18:18:03.579 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:03 compute-0 nova_compute[189296]: 2025-11-28 18:18:03.766 189300 DEBUG nova.network.neutron [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 28 18:18:03 compute-0 nova_compute[189296]: 2025-11-28 18:18:03.956 189300 DEBUG nova.compute.manager [req-2b3f88e4-4954-42c8-896b-d5aaf28b3c2f req-76f82a0c-a3b6-474a-baf3-6d723a259b0b 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Received event network-changed-6c1cb38b-9fde-458f-a36b-d1c95b04690c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:18:03 compute-0 nova_compute[189296]: 2025-11-28 18:18:03.957 189300 DEBUG nova.compute.manager [req-2b3f88e4-4954-42c8-896b-d5aaf28b3c2f req-76f82a0c-a3b6-474a-baf3-6d723a259b0b 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Refreshing instance network info cache due to event network-changed-6c1cb38b-9fde-458f-a36b-d1c95b04690c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 28 18:18:03 compute-0 nova_compute[189296]: 2025-11-28 18:18:03.957 189300 DEBUG oslo_concurrency.lockutils [req-2b3f88e4-4954-42c8-896b-d5aaf28b3c2f req-76f82a0c-a3b6-474a-baf3-6d723a259b0b 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-c0b50299-41b1-48cf-b075-08ca569a1bd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:18:04 compute-0 nova_compute[189296]: 2025-11-28 18:18:04.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.481 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.830 189300 DEBUG nova.network.neutron [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Updating instance_info_cache with network_info: [{"id": "6c1cb38b-9fde-458f-a36b-d1c95b04690c", "address": "fa:16:3e:1c:34:7e", "network": {"id": "970caef7-c556-4054-b603-3084ef389d78", "bridge": "br-int", "label": "tempest-ServersTestJSON-467423109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e27f3ae6d694d7ca975b778b997e12f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c1cb38b-9f", "ovs_interfaceid": "6c1cb38b-9fde-458f-a36b-d1c95b04690c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.848 189300 DEBUG oslo_concurrency.lockutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Releasing lock "refresh_cache-c0b50299-41b1-48cf-b075-08ca569a1bd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.848 189300 DEBUG nova.compute.manager [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Instance network_info: |[{"id": "6c1cb38b-9fde-458f-a36b-d1c95b04690c", "address": "fa:16:3e:1c:34:7e", "network": {"id": "970caef7-c556-4054-b603-3084ef389d78", "bridge": "br-int", "label": "tempest-ServersTestJSON-467423109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e27f3ae6d694d7ca975b778b997e12f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c1cb38b-9f", "ovs_interfaceid": "6c1cb38b-9fde-458f-a36b-d1c95b04690c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.849 189300 DEBUG oslo_concurrency.lockutils [req-2b3f88e4-4954-42c8-896b-d5aaf28b3c2f req-76f82a0c-a3b6-474a-baf3-6d723a259b0b 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-c0b50299-41b1-48cf-b075-08ca569a1bd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.849 189300 DEBUG nova.network.neutron [req-2b3f88e4-4954-42c8-896b-d5aaf28b3c2f req-76f82a0c-a3b6-474a-baf3-6d723a259b0b 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Refreshing network info cache for port 6c1cb38b-9fde-458f-a36b-d1c95b04690c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.852 189300 DEBUG nova.virt.libvirt.driver [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Start _get_guest_xml network_info=[{"id": "6c1cb38b-9fde-458f-a36b-d1c95b04690c", "address": "fa:16:3e:1c:34:7e", "network": {"id": "970caef7-c556-4054-b603-3084ef389d78", "bridge": "br-int", "label": "tempest-ServersTestJSON-467423109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e27f3ae6d694d7ca975b778b997e12f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c1cb38b-9f", "ovs_interfaceid": "6c1cb38b-9fde-458f-a36b-d1c95b04690c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-28T18:16:38Z,direct_url=<?>,disk_format='qcow2',id=ffec9e61-65fb-46ae-8d34-338639229ec3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-28T18:16:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'image_id': 'ffec9e61-65fb-46ae-8d34-338639229ec3'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.871 189300 WARNING nova.virt.libvirt.driver [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.882 189300 DEBUG nova.virt.libvirt.host [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.883 189300 DEBUG nova.virt.libvirt.host [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.887 189300 DEBUG nova.virt.libvirt.host [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.888 189300 DEBUG nova.virt.libvirt.host [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.888 189300 DEBUG nova.virt.libvirt.driver [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.889 189300 DEBUG nova.virt.hardware [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-28T18:16:37Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b177f611-8f79-4bfd-9a12-e83e9545757b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-28T18:16:38Z,direct_url=<?>,disk_format='qcow2',id=ffec9e61-65fb-46ae-8d34-338639229ec3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-28T18:16:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.889 189300 DEBUG nova.virt.hardware [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.889 189300 DEBUG nova.virt.hardware [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.890 189300 DEBUG nova.virt.hardware [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.890 189300 DEBUG nova.virt.hardware [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.890 189300 DEBUG nova.virt.hardware [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.891 189300 DEBUG nova.virt.hardware [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.891 189300 DEBUG nova.virt.hardware [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.891 189300 DEBUG nova.virt.hardware [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.891 189300 DEBUG nova.virt.hardware [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.892 189300 DEBUG nova.virt.hardware [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.895 189300 DEBUG nova.virt.libvirt.vif [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T18:17:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1437168499',display_name='tempest-ServersTestJSON-server-1437168499',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1437168499',id=8,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI/spFmmDJn4hjiHto1O3HEG2lkmBxJ1SpHsrnNRxtEsV94pTDKBIisMSCAnYO3VLsMYl/ToKwmIRk9h56powWNIToqHeQAHPP2PdDFOueNrXgNE2YIBmYZhrVq8QAqSxQ==',key_name='tempest-keypair-1089872028',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e27f3ae6d694d7ca975b778b997e12f',ramdisk_id='',reservation_id='r-072h272w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1480213909',owner_user_name='tempest-ServersTestJSON-1480213909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:17:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bbe93898827d4d57a49114a72388c0ab',uuid=c0b50299-41b1-48cf-b075-08ca569a1bd5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6c1cb38b-9fde-458f-a36b-d1c95b04690c", "address": "fa:16:3e:1c:34:7e", "network": {"id": "970caef7-c556-4054-b603-3084ef389d78", "bridge": "br-int", "label": "tempest-ServersTestJSON-467423109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e27f3ae6d694d7ca975b778b997e12f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c1cb38b-9f", "ovs_interfaceid": "6c1cb38b-9fde-458f-a36b-d1c95b04690c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.895 189300 DEBUG nova.network.os_vif_util [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Converting VIF {"id": "6c1cb38b-9fde-458f-a36b-d1c95b04690c", "address": "fa:16:3e:1c:34:7e", "network": {"id": "970caef7-c556-4054-b603-3084ef389d78", "bridge": "br-int", "label": "tempest-ServersTestJSON-467423109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e27f3ae6d694d7ca975b778b997e12f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c1cb38b-9f", "ovs_interfaceid": "6c1cb38b-9fde-458f-a36b-d1c95b04690c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.896 189300 DEBUG nova.network.os_vif_util [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:34:7e,bridge_name='br-int',has_traffic_filtering=True,id=6c1cb38b-9fde-458f-a36b-d1c95b04690c,network=Network(970caef7-c556-4054-b603-3084ef389d78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c1cb38b-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.897 189300 DEBUG nova.objects.instance [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Lazy-loading 'pci_devices' on Instance uuid c0b50299-41b1-48cf-b075-08ca569a1bd5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.915 189300 DEBUG nova.virt.libvirt.driver [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] End _get_guest_xml xml=<domain type="kvm">
Nov 28 18:18:05 compute-0 nova_compute[189296]:  <uuid>c0b50299-41b1-48cf-b075-08ca569a1bd5</uuid>
Nov 28 18:18:05 compute-0 nova_compute[189296]:  <name>instance-00000008</name>
Nov 28 18:18:05 compute-0 nova_compute[189296]:  <memory>131072</memory>
Nov 28 18:18:05 compute-0 nova_compute[189296]:  <vcpu>1</vcpu>
Nov 28 18:18:05 compute-0 nova_compute[189296]:  <metadata>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <nova:name>tempest-ServersTestJSON-server-1437168499</nova:name>
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <nova:creationTime>2025-11-28 18:18:05</nova:creationTime>
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <nova:flavor name="m1.nano">
Nov 28 18:18:05 compute-0 nova_compute[189296]:        <nova:memory>128</nova:memory>
Nov 28 18:18:05 compute-0 nova_compute[189296]:        <nova:disk>1</nova:disk>
Nov 28 18:18:05 compute-0 nova_compute[189296]:        <nova:swap>0</nova:swap>
Nov 28 18:18:05 compute-0 nova_compute[189296]:        <nova:ephemeral>0</nova:ephemeral>
Nov 28 18:18:05 compute-0 nova_compute[189296]:        <nova:vcpus>1</nova:vcpus>
Nov 28 18:18:05 compute-0 nova_compute[189296]:      </nova:flavor>
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <nova:owner>
Nov 28 18:18:05 compute-0 nova_compute[189296]:        <nova:user uuid="bbe93898827d4d57a49114a72388c0ab">tempest-ServersTestJSON-1480213909-project-member</nova:user>
Nov 28 18:18:05 compute-0 nova_compute[189296]:        <nova:project uuid="4e27f3ae6d694d7ca975b778b997e12f">tempest-ServersTestJSON-1480213909</nova:project>
Nov 28 18:18:05 compute-0 nova_compute[189296]:      </nova:owner>
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <nova:root type="image" uuid="ffec9e61-65fb-46ae-8d34-338639229ec3"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <nova:ports>
Nov 28 18:18:05 compute-0 nova_compute[189296]:        <nova:port uuid="6c1cb38b-9fde-458f-a36b-d1c95b04690c">
Nov 28 18:18:05 compute-0 nova_compute[189296]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:        </nova:port>
Nov 28 18:18:05 compute-0 nova_compute[189296]:      </nova:ports>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    </nova:instance>
Nov 28 18:18:05 compute-0 nova_compute[189296]:  </metadata>
Nov 28 18:18:05 compute-0 nova_compute[189296]:  <sysinfo type="smbios">
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <system>
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <entry name="manufacturer">RDO</entry>
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <entry name="product">OpenStack Compute</entry>
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <entry name="serial">c0b50299-41b1-48cf-b075-08ca569a1bd5</entry>
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <entry name="uuid">c0b50299-41b1-48cf-b075-08ca569a1bd5</entry>
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <entry name="family">Virtual Machine</entry>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    </system>
Nov 28 18:18:05 compute-0 nova_compute[189296]:  </sysinfo>
Nov 28 18:18:05 compute-0 nova_compute[189296]:  <os>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <boot dev="hd"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <smbios mode="sysinfo"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:  </os>
Nov 28 18:18:05 compute-0 nova_compute[189296]:  <features>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <acpi/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <apic/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <vmcoreinfo/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:  </features>
Nov 28 18:18:05 compute-0 nova_compute[189296]:  <clock offset="utc">
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <timer name="pit" tickpolicy="delay"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <timer name="hpet" present="no"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:  </clock>
Nov 28 18:18:05 compute-0 nova_compute[189296]:  <cpu mode="host-model" match="exact">
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <topology sockets="1" cores="1" threads="1"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:  </cpu>
Nov 28 18:18:05 compute-0 nova_compute[189296]:  <devices>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <disk type="file" device="disk">
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/c0b50299-41b1-48cf-b075-08ca569a1bd5/disk"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <target dev="vda" bus="virtio"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <disk type="file" device="cdrom">
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <driver name="qemu" type="raw" cache="none"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/c0b50299-41b1-48cf-b075-08ca569a1bd5/disk.config"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <target dev="sda" bus="sata"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <interface type="ethernet">
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <mac address="fa:16:3e:1c:34:7e"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <driver name="vhost" rx_queue_size="512"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <mtu size="1442"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <target dev="tap6c1cb38b-9f"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    </interface>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <serial type="pty">
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <log file="/var/lib/nova/instances/c0b50299-41b1-48cf-b075-08ca569a1bd5/console.log" append="off"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    </serial>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <video>
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    </video>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <input type="tablet" bus="usb"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <rng model="virtio">
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <backend model="random">/dev/urandom</backend>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    </rng>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <controller type="usb" index="0"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    <memballoon model="virtio">
Nov 28 18:18:05 compute-0 nova_compute[189296]:      <stats period="10"/>
Nov 28 18:18:05 compute-0 nova_compute[189296]:    </memballoon>
Nov 28 18:18:05 compute-0 nova_compute[189296]:  </devices>
Nov 28 18:18:05 compute-0 nova_compute[189296]: </domain>
Nov 28 18:18:05 compute-0 nova_compute[189296]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.916 189300 DEBUG nova.compute.manager [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Preparing to wait for external event network-vif-plugged-6c1cb38b-9fde-458f-a36b-d1c95b04690c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.917 189300 DEBUG oslo_concurrency.lockutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Acquiring lock "c0b50299-41b1-48cf-b075-08ca569a1bd5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.917 189300 DEBUG oslo_concurrency.lockutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Lock "c0b50299-41b1-48cf-b075-08ca569a1bd5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.917 189300 DEBUG oslo_concurrency.lockutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Lock "c0b50299-41b1-48cf-b075-08ca569a1bd5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.918 189300 DEBUG nova.virt.libvirt.vif [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T18:17:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1437168499',display_name='tempest-ServersTestJSON-server-1437168499',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1437168499',id=8,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI/spFmmDJn4hjiHto1O3HEG2lkmBxJ1SpHsrnNRxtEsV94pTDKBIisMSCAnYO3VLsMYl/ToKwmIRk9h56powWNIToqHeQAHPP2PdDFOueNrXgNE2YIBmYZhrVq8QAqSxQ==',key_name='tempest-keypair-1089872028',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e27f3ae6d694d7ca975b778b997e12f',ramdisk_id='',reservation_id='r-072h272w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1480213909',owner_user_name='tempest-ServersTestJSON-1480213909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:17:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bbe93898827d4d57a49114a72388c0ab',uuid=c0b50299-41b1-48cf-b075-08ca569a1bd5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6c1cb38b-9fde-458f-a36b-d1c95b04690c", "address": "fa:16:3e:1c:34:7e", "network": {"id": "970caef7-c556-4054-b603-3084ef389d78", "bridge": "br-int", "label": "tempest-ServersTestJSON-467423109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e27f3ae6d694d7ca975b778b997e12f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c1cb38b-9f", "ovs_interfaceid": "6c1cb38b-9fde-458f-a36b-d1c95b04690c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.919 189300 DEBUG nova.network.os_vif_util [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Converting VIF {"id": "6c1cb38b-9fde-458f-a36b-d1c95b04690c", "address": "fa:16:3e:1c:34:7e", "network": {"id": "970caef7-c556-4054-b603-3084ef389d78", "bridge": "br-int", "label": "tempest-ServersTestJSON-467423109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e27f3ae6d694d7ca975b778b997e12f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c1cb38b-9f", "ovs_interfaceid": "6c1cb38b-9fde-458f-a36b-d1c95b04690c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.919 189300 DEBUG nova.network.os_vif_util [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:34:7e,bridge_name='br-int',has_traffic_filtering=True,id=6c1cb38b-9fde-458f-a36b-d1c95b04690c,network=Network(970caef7-c556-4054-b603-3084ef389d78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c1cb38b-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.920 189300 DEBUG os_vif [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:34:7e,bridge_name='br-int',has_traffic_filtering=True,id=6c1cb38b-9fde-458f-a36b-d1c95b04690c,network=Network(970caef7-c556-4054-b603-3084ef389d78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c1cb38b-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.921 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.921 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.922 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.924 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.925 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c1cb38b-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.925 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6c1cb38b-9f, col_values=(('external_ids', {'iface-id': '6c1cb38b-9fde-458f-a36b-d1c95b04690c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1c:34:7e', 'vm-uuid': 'c0b50299-41b1-48cf-b075-08ca569a1bd5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.927 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:05 compute-0 NetworkManager[56307]: <info>  [1764353885.9295] manager: (tap6c1cb38b-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.929 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.936 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:05 compute-0 nova_compute[189296]: 2025-11-28 18:18:05.937 189300 INFO os_vif [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:34:7e,bridge_name='br-int',has_traffic_filtering=True,id=6c1cb38b-9fde-458f-a36b-d1c95b04690c,network=Network(970caef7-c556-4054-b603-3084ef389d78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c1cb38b-9f')#033[00m
Nov 28 18:18:06 compute-0 nova_compute[189296]: 2025-11-28 18:18:06.193 189300 DEBUG nova.virt.libvirt.driver [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:18:06 compute-0 nova_compute[189296]: 2025-11-28 18:18:06.194 189300 DEBUG nova.virt.libvirt.driver [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:18:06 compute-0 nova_compute[189296]: 2025-11-28 18:18:06.195 189300 DEBUG nova.virt.libvirt.driver [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] No VIF found with MAC fa:16:3e:1c:34:7e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 28 18:18:06 compute-0 nova_compute[189296]: 2025-11-28 18:18:06.195 189300 INFO nova.virt.libvirt.driver [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Using config drive#033[00m
Nov 28 18:18:06 compute-0 nova_compute[189296]: 2025-11-28 18:18:06.564 189300 INFO nova.virt.libvirt.driver [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Creating config drive at /var/lib/nova/instances/c0b50299-41b1-48cf-b075-08ca569a1bd5/disk.config#033[00m
Nov 28 18:18:06 compute-0 nova_compute[189296]: 2025-11-28 18:18:06.574 189300 DEBUG oslo_concurrency.processutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c0b50299-41b1-48cf-b075-08ca569a1bd5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_3k6g2ho execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:18:06 compute-0 nova_compute[189296]: 2025-11-28 18:18:06.710 189300 DEBUG oslo_concurrency.processutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c0b50299-41b1-48cf-b075-08ca569a1bd5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_3k6g2ho" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:18:06 compute-0 kernel: tap6c1cb38b-9f: entered promiscuous mode
Nov 28 18:18:06 compute-0 NetworkManager[56307]: <info>  [1764353886.7996] manager: (tap6c1cb38b-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Nov 28 18:18:06 compute-0 nova_compute[189296]: 2025-11-28 18:18:06.800 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:06 compute-0 ovn_controller[97771]: 2025-11-28T18:18:06Z|00085|binding|INFO|Claiming lport 6c1cb38b-9fde-458f-a36b-d1c95b04690c for this chassis.
Nov 28 18:18:06 compute-0 ovn_controller[97771]: 2025-11-28T18:18:06Z|00086|binding|INFO|6c1cb38b-9fde-458f-a36b-d1c95b04690c: Claiming fa:16:3e:1c:34:7e 10.100.0.7
Nov 28 18:18:06 compute-0 nova_compute[189296]: 2025-11-28 18:18:06.805 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:06 compute-0 systemd-machined[155703]: New machine qemu-9-instance-00000008.
Nov 28 18:18:06 compute-0 nova_compute[189296]: 2025-11-28 18:18:06.858 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:06 compute-0 ovn_controller[97771]: 2025-11-28T18:18:06Z|00087|binding|INFO|Setting lport 6c1cb38b-9fde-458f-a36b-d1c95b04690c ovn-installed in OVS
Nov 28 18:18:06 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000008.
Nov 28 18:18:06 compute-0 nova_compute[189296]: 2025-11-28 18:18:06.863 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:06 compute-0 systemd-udevd[248478]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 18:18:06 compute-0 ovn_controller[97771]: 2025-11-28T18:18:06Z|00088|binding|INFO|Setting lport 6c1cb38b-9fde-458f-a36b-d1c95b04690c up in Southbound
Nov 28 18:18:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:06.884 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:34:7e 10.100.0.7'], port_security=['fa:16:3e:1c:34:7e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c0b50299-41b1-48cf-b075-08ca569a1bd5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-970caef7-c556-4054-b603-3084ef389d78', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e27f3ae6d694d7ca975b778b997e12f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '670fcc8d-5461-45d8-a8b3-a1faeaa2cc9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bb12d81a-6f7d-4391-9557-40e0910d6d06, chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=6c1cb38b-9fde-458f-a36b-d1c95b04690c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:18:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:06.885 106624 INFO neutron.agent.ovn.metadata.agent [-] Port 6c1cb38b-9fde-458f-a36b-d1c95b04690c in datapath 970caef7-c556-4054-b603-3084ef389d78 bound to our chassis#033[00m
Nov 28 18:18:06 compute-0 NetworkManager[56307]: <info>  [1764353886.8887] device (tap6c1cb38b-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 28 18:18:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:06.888 106624 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 970caef7-c556-4054-b603-3084ef389d78#033[00m
Nov 28 18:18:06 compute-0 NetworkManager[56307]: <info>  [1764353886.8929] device (tap6c1cb38b-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 28 18:18:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:06.900 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[59f2e324-3f9c-4f38-b3aa-fc774160a8bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:06.902 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap970caef7-c1 in ovnmeta-970caef7-c556-4054-b603-3084ef389d78 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 28 18:18:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:06.903 238909 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap970caef7-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 28 18:18:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:06.903 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[d4f885bd-e844-4acb-b610-d1420d5b59fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:06.904 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[f647a94f-6a78-499f-b8e3-0b595d18efdd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:06.916 106734 DEBUG oslo.privsep.daemon [-] privsep: reply[4bad26f0-065d-42b9-9efc-6ca93a0c5bc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:06.931 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[0ca66597-0264-44a4-8cc9-775dd848f56c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:06.969 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[be37a18d-e4a5-42c1-9851-145d9bda8c2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:06.980 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[d078dd31-07f6-4e64-b23e-a72d10e0ee35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:06 compute-0 NetworkManager[56307]: <info>  [1764353886.9821] manager: (tap970caef7-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Nov 28 18:18:06 compute-0 systemd-udevd[248480]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:07.018 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[33e011f5-af4f-4166-84ce-03c6a9903be8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:07.022 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6eebbc-7a50-4b5a-8c3e-739d2109f032]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:07 compute-0 NetworkManager[56307]: <info>  [1764353887.0464] device (tap970caef7-c0): carrier: link connected
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:07.054 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[1a2f747a-ed6a-4d25-94dc-be40c18302ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:07.076 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[36ba6b13-15bb-4cef-8e9c-83c0c2fe8de3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap970caef7-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:f9:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503264, 'reachable_time': 38923, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248513, 'error': None, 'target': 'ovnmeta-970caef7-c556-4054-b603-3084ef389d78', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:07.090 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[829156a4-3df9-470f-ac38-13e44f567953]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:f994'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503264, 'tstamp': 503264}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248514, 'error': None, 'target': 'ovnmeta-970caef7-c556-4054-b603-3084ef389d78', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:07.107 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[57732d01-6346-4b41-8eb5-657f2463f945]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap970caef7-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:f9:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503264, 'reachable_time': 38923, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248515, 'error': None, 'target': 'ovnmeta-970caef7-c556-4054-b603-3084ef389d78', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:07.136 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[2d2f8735-305a-4edc-9035-0c9a9038d990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:07.191 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[d8dd0451-d563-41a3-9c27-7de6c8a867c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:07.192 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap970caef7-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:07.192 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:07.193 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap970caef7-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:07 compute-0 nova_compute[189296]: 2025-11-28 18:18:07.194 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:07 compute-0 NetworkManager[56307]: <info>  [1764353887.1957] manager: (tap970caef7-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Nov 28 18:18:07 compute-0 kernel: tap970caef7-c0: entered promiscuous mode
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:07.200 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap970caef7-c0, col_values=(('external_ids', {'iface-id': 'b8201a63-3ccc-4661-a145-e0b355d53c38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:07 compute-0 nova_compute[189296]: 2025-11-28 18:18:07.201 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:07 compute-0 ovn_controller[97771]: 2025-11-28T18:18:07Z|00089|binding|INFO|Releasing lport b8201a63-3ccc-4661-a145-e0b355d53c38 from this chassis (sb_readonly=0)
Nov 28 18:18:07 compute-0 nova_compute[189296]: 2025-11-28 18:18:07.227 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:07.229 106624 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/970caef7-c556-4054-b603-3084ef389d78.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/970caef7-c556-4054-b603-3084ef389d78.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:07.231 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[64b8dc58-0718-44e5-8ceb-7ec050f8eb48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:07.232 106624 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]: global
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]:    log         /dev/log local0 debug
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]:    log-tag     haproxy-metadata-proxy-970caef7-c556-4054-b603-3084ef389d78
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]:    user        root
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]:    group       root
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]:    maxconn     1024
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]:    pidfile     /var/lib/neutron/external/pids/970caef7-c556-4054-b603-3084ef389d78.pid.haproxy
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]:    daemon
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]: defaults
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]:    log global
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]:    mode http
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]:    option httplog
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]:    option dontlognull
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]:    option http-server-close
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]:    option forwardfor
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]:    retries                 3
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]:    timeout http-request    30s
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]:    timeout connect         30s
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]:    timeout client          32s
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]:    timeout server          32s
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]:    timeout http-keep-alive 30s
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]: listen listener
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]:    bind 169.254.169.254:80
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]:    server metadata /var/lib/neutron/metadata_proxy
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]:    http-request add-header X-OVN-Network-ID 970caef7-c556-4054-b603-3084ef389d78
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 28 18:18:07 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:07.233 106624 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-970caef7-c556-4054-b603-3084ef389d78', 'env', 'PROCESS_TAG=haproxy-970caef7-c556-4054-b603-3084ef389d78', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/970caef7-c556-4054-b603-3084ef389d78.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 28 18:18:07 compute-0 nova_compute[189296]: 2025-11-28 18:18:07.472 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764353887.4714801, c0b50299-41b1-48cf-b075-08ca569a1bd5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:18:07 compute-0 nova_compute[189296]: 2025-11-28 18:18:07.473 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] VM Started (Lifecycle Event)#033[00m
Nov 28 18:18:07 compute-0 nova_compute[189296]: 2025-11-28 18:18:07.507 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:18:07 compute-0 nova_compute[189296]: 2025-11-28 18:18:07.514 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764353887.4717944, c0b50299-41b1-48cf-b075-08ca569a1bd5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:18:07 compute-0 nova_compute[189296]: 2025-11-28 18:18:07.515 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] VM Paused (Lifecycle Event)#033[00m
Nov 28 18:18:07 compute-0 nova_compute[189296]: 2025-11-28 18:18:07.600 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:18:07 compute-0 nova_compute[189296]: 2025-11-28 18:18:07.605 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 18:18:07 compute-0 podman[248553]: 2025-11-28 18:18:07.608055804 +0000 UTC m=+0.062093814 container create 17447c863a0b1d1a985f887e326a073872aa0b4d4269f5b5086209b675c42755 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-970caef7-c556-4054-b603-3084ef389d78, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 28 18:18:07 compute-0 systemd[1]: Started libpod-conmon-17447c863a0b1d1a985f887e326a073872aa0b4d4269f5b5086209b675c42755.scope.
Nov 28 18:18:07 compute-0 podman[248553]: 2025-11-28 18:18:07.571743644 +0000 UTC m=+0.025781664 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 28 18:18:07 compute-0 systemd[1]: Started libcrun container.
Nov 28 18:18:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29b7817e478e51c054f20f7df687a25818fe9f7292db9f9d22bf7a1bc1881e25/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 28 18:18:07 compute-0 podman[248553]: 2025-11-28 18:18:07.710208668 +0000 UTC m=+0.164246688 container init 17447c863a0b1d1a985f887e326a073872aa0b4d4269f5b5086209b675c42755 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-970caef7-c556-4054-b603-3084ef389d78, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 28 18:18:07 compute-0 podman[248553]: 2025-11-28 18:18:07.720525011 +0000 UTC m=+0.174563011 container start 17447c863a0b1d1a985f887e326a073872aa0b4d4269f5b5086209b675c42755 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-970caef7-c556-4054-b603-3084ef389d78, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 28 18:18:07 compute-0 neutron-haproxy-ovnmeta-970caef7-c556-4054-b603-3084ef389d78[248567]: [NOTICE]   (248571) : New worker (248573) forked
Nov 28 18:18:07 compute-0 neutron-haproxy-ovnmeta-970caef7-c556-4054-b603-3084ef389d78[248567]: [NOTICE]   (248571) : Loading success.
Nov 28 18:18:07 compute-0 nova_compute[189296]: 2025-11-28 18:18:07.877 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 28 18:18:08 compute-0 nova_compute[189296]: 2025-11-28 18:18:08.153 189300 DEBUG nova.network.neutron [req-2b3f88e4-4954-42c8-896b-d5aaf28b3c2f req-76f82a0c-a3b6-474a-baf3-6d723a259b0b 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Updated VIF entry in instance network info cache for port 6c1cb38b-9fde-458f-a36b-d1c95b04690c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 28 18:18:08 compute-0 nova_compute[189296]: 2025-11-28 18:18:08.154 189300 DEBUG nova.network.neutron [req-2b3f88e4-4954-42c8-896b-d5aaf28b3c2f req-76f82a0c-a3b6-474a-baf3-6d723a259b0b 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Updating instance_info_cache with network_info: [{"id": "6c1cb38b-9fde-458f-a36b-d1c95b04690c", "address": "fa:16:3e:1c:34:7e", "network": {"id": "970caef7-c556-4054-b603-3084ef389d78", "bridge": "br-int", "label": "tempest-ServersTestJSON-467423109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e27f3ae6d694d7ca975b778b997e12f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c1cb38b-9f", "ovs_interfaceid": "6c1cb38b-9fde-458f-a36b-d1c95b04690c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:18:08 compute-0 nova_compute[189296]: 2025-11-28 18:18:08.207 189300 DEBUG oslo_concurrency.lockutils [req-2b3f88e4-4954-42c8-896b-d5aaf28b3c2f req-76f82a0c-a3b6-474a-baf3-6d723a259b0b 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-c0b50299-41b1-48cf-b075-08ca569a1bd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:18:08 compute-0 nova_compute[189296]: 2025-11-28 18:18:08.581 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:10 compute-0 nova_compute[189296]: 2025-11-28 18:18:10.929 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:11 compute-0 nova_compute[189296]: 2025-11-28 18:18:11.637 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:18:12 compute-0 nova_compute[189296]: 2025-11-28 18:18:12.004 189300 DEBUG nova.compute.manager [req-d48dbf1e-e69b-478f-a9fa-0a50a2b6e0ec req-0e76b1bb-053e-442d-ac52-0960d60f1dda 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Received event network-vif-plugged-0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:18:12 compute-0 nova_compute[189296]: 2025-11-28 18:18:12.005 189300 DEBUG oslo_concurrency.lockutils [req-d48dbf1e-e69b-478f-a9fa-0a50a2b6e0ec req-0e76b1bb-053e-442d-ac52-0960d60f1dda 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "9d9438df-a3bc-4004-95a3-0d76f449fe7e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:12 compute-0 nova_compute[189296]: 2025-11-28 18:18:12.006 189300 DEBUG oslo_concurrency.lockutils [req-d48dbf1e-e69b-478f-a9fa-0a50a2b6e0ec req-0e76b1bb-053e-442d-ac52-0960d60f1dda 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "9d9438df-a3bc-4004-95a3-0d76f449fe7e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:12 compute-0 nova_compute[189296]: 2025-11-28 18:18:12.006 189300 DEBUG oslo_concurrency.lockutils [req-d48dbf1e-e69b-478f-a9fa-0a50a2b6e0ec req-0e76b1bb-053e-442d-ac52-0960d60f1dda 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "9d9438df-a3bc-4004-95a3-0d76f449fe7e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:12 compute-0 nova_compute[189296]: 2025-11-28 18:18:12.006 189300 DEBUG nova.compute.manager [req-d48dbf1e-e69b-478f-a9fa-0a50a2b6e0ec req-0e76b1bb-053e-442d-ac52-0960d60f1dda 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Processing event network-vif-plugged-0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 28 18:18:12 compute-0 nova_compute[189296]: 2025-11-28 18:18:12.007 189300 DEBUG nova.compute.manager [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Instance event wait completed in 16 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 28 18:18:12 compute-0 nova_compute[189296]: 2025-11-28 18:18:12.013 189300 DEBUG nova.virt.libvirt.driver [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 28 18:18:12 compute-0 nova_compute[189296]: 2025-11-28 18:18:12.014 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764353892.0136182, 9d9438df-a3bc-4004-95a3-0d76f449fe7e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:18:12 compute-0 nova_compute[189296]: 2025-11-28 18:18:12.014 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] VM Resumed (Lifecycle Event)#033[00m
Nov 28 18:18:12 compute-0 nova_compute[189296]: 2025-11-28 18:18:12.023 189300 INFO nova.virt.libvirt.driver [-] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Instance spawned successfully.#033[00m
Nov 28 18:18:12 compute-0 nova_compute[189296]: 2025-11-28 18:18:12.023 189300 DEBUG nova.virt.libvirt.driver [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 28 18:18:12 compute-0 nova_compute[189296]: 2025-11-28 18:18:12.038 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:18:12 compute-0 podman[248583]: 2025-11-28 18:18:12.042267232 +0000 UTC m=+0.080745801 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=f26160204c78771e78cdd2489258319b, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Nov 28 18:18:12 compute-0 nova_compute[189296]: 2025-11-28 18:18:12.047 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 18:18:12 compute-0 nova_compute[189296]: 2025-11-28 18:18:12.054 189300 DEBUG nova.virt.libvirt.driver [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:18:12 compute-0 nova_compute[189296]: 2025-11-28 18:18:12.055 189300 DEBUG nova.virt.libvirt.driver [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:18:12 compute-0 nova_compute[189296]: 2025-11-28 18:18:12.055 189300 DEBUG nova.virt.libvirt.driver [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:18:12 compute-0 nova_compute[189296]: 2025-11-28 18:18:12.056 189300 DEBUG nova.virt.libvirt.driver [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:18:12 compute-0 nova_compute[189296]: 2025-11-28 18:18:12.056 189300 DEBUG nova.virt.libvirt.driver [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:18:12 compute-0 podman[248584]: 2025-11-28 18:18:12.057593388 +0000 UTC m=+0.104498953 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 28 18:18:12 compute-0 nova_compute[189296]: 2025-11-28 18:18:12.057 189300 DEBUG nova.virt.libvirt.driver [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:18:12 compute-0 nova_compute[189296]: 2025-11-28 18:18:12.087 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 28 18:18:12 compute-0 podman[248582]: 2025-11-28 18:18:12.09360906 +0000 UTC m=+0.136150228 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, maintainer=Red Hat, Inc., config_id=edpm, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, release=1755695350)
Nov 28 18:18:12 compute-0 nova_compute[189296]: 2025-11-28 18:18:12.135 189300 INFO nova.compute.manager [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Took 29.41 seconds to spawn the instance on the hypervisor.#033[00m
Nov 28 18:18:12 compute-0 nova_compute[189296]: 2025-11-28 18:18:12.136 189300 DEBUG nova.compute.manager [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:18:12 compute-0 nova_compute[189296]: 2025-11-28 18:18:12.417 189300 INFO nova.compute.manager [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Took 30.13 seconds to build instance.#033[00m
Nov 28 18:18:12 compute-0 nova_compute[189296]: 2025-11-28 18:18:12.443 189300 DEBUG oslo_concurrency.lockutils [None req-c8c580f8-ee14-4d2d-9826-bf211d753048 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Lock "9d9438df-a3bc-4004-95a3-0d76f449fe7e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 30.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:13 compute-0 nova_compute[189296]: 2025-11-28 18:18:13.584 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.262 189300 DEBUG nova.compute.manager [req-92722e93-eb19-485d-8194-f76abc75aec3 req-04b3fccf-f66c-4b8c-9324-78388ed07394 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Received event network-vif-plugged-0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.263 189300 DEBUG oslo_concurrency.lockutils [req-92722e93-eb19-485d-8194-f76abc75aec3 req-04b3fccf-f66c-4b8c-9324-78388ed07394 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "9d9438df-a3bc-4004-95a3-0d76f449fe7e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.263 189300 DEBUG oslo_concurrency.lockutils [req-92722e93-eb19-485d-8194-f76abc75aec3 req-04b3fccf-f66c-4b8c-9324-78388ed07394 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "9d9438df-a3bc-4004-95a3-0d76f449fe7e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.263 189300 DEBUG oslo_concurrency.lockutils [req-92722e93-eb19-485d-8194-f76abc75aec3 req-04b3fccf-f66c-4b8c-9324-78388ed07394 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "9d9438df-a3bc-4004-95a3-0d76f449fe7e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.263 189300 DEBUG nova.compute.manager [req-92722e93-eb19-485d-8194-f76abc75aec3 req-04b3fccf-f66c-4b8c-9324-78388ed07394 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] No waiting events found dispatching network-vif-plugged-0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.264 189300 WARNING nova.compute.manager [req-92722e93-eb19-485d-8194-f76abc75aec3 req-04b3fccf-f66c-4b8c-9324-78388ed07394 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Received unexpected event network-vif-plugged-0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 for instance with vm_state active and task_state None.#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.264 189300 DEBUG nova.compute.manager [req-92722e93-eb19-485d-8194-f76abc75aec3 req-04b3fccf-f66c-4b8c-9324-78388ed07394 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Received event network-vif-plugged-c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.264 189300 DEBUG oslo_concurrency.lockutils [req-92722e93-eb19-485d-8194-f76abc75aec3 req-04b3fccf-f66c-4b8c-9324-78388ed07394 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.265 189300 DEBUG oslo_concurrency.lockutils [req-92722e93-eb19-485d-8194-f76abc75aec3 req-04b3fccf-f66c-4b8c-9324-78388ed07394 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.265 189300 DEBUG oslo_concurrency.lockutils [req-92722e93-eb19-485d-8194-f76abc75aec3 req-04b3fccf-f66c-4b8c-9324-78388ed07394 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.266 189300 DEBUG nova.compute.manager [req-92722e93-eb19-485d-8194-f76abc75aec3 req-04b3fccf-f66c-4b8c-9324-78388ed07394 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Processing event network-vif-plugged-c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.266 189300 DEBUG nova.compute.manager [req-92722e93-eb19-485d-8194-f76abc75aec3 req-04b3fccf-f66c-4b8c-9324-78388ed07394 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Received event network-vif-plugged-c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.266 189300 DEBUG oslo_concurrency.lockutils [req-92722e93-eb19-485d-8194-f76abc75aec3 req-04b3fccf-f66c-4b8c-9324-78388ed07394 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.267 189300 DEBUG oslo_concurrency.lockutils [req-92722e93-eb19-485d-8194-f76abc75aec3 req-04b3fccf-f66c-4b8c-9324-78388ed07394 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.267 189300 DEBUG oslo_concurrency.lockutils [req-92722e93-eb19-485d-8194-f76abc75aec3 req-04b3fccf-f66c-4b8c-9324-78388ed07394 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.267 189300 DEBUG nova.compute.manager [req-92722e93-eb19-485d-8194-f76abc75aec3 req-04b3fccf-f66c-4b8c-9324-78388ed07394 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] No waiting events found dispatching network-vif-plugged-c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.268 189300 WARNING nova.compute.manager [req-92722e93-eb19-485d-8194-f76abc75aec3 req-04b3fccf-f66c-4b8c-9324-78388ed07394 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Received unexpected event network-vif-plugged-c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 for instance with vm_state building and task_state spawning.#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.268 189300 DEBUG nova.compute.manager [req-92722e93-eb19-485d-8194-f76abc75aec3 req-04b3fccf-f66c-4b8c-9324-78388ed07394 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Received event network-vif-plugged-6c1cb38b-9fde-458f-a36b-d1c95b04690c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.268 189300 DEBUG oslo_concurrency.lockutils [req-92722e93-eb19-485d-8194-f76abc75aec3 req-04b3fccf-f66c-4b8c-9324-78388ed07394 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "c0b50299-41b1-48cf-b075-08ca569a1bd5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.268 189300 DEBUG oslo_concurrency.lockutils [req-92722e93-eb19-485d-8194-f76abc75aec3 req-04b3fccf-f66c-4b8c-9324-78388ed07394 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "c0b50299-41b1-48cf-b075-08ca569a1bd5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.269 189300 DEBUG oslo_concurrency.lockutils [req-92722e93-eb19-485d-8194-f76abc75aec3 req-04b3fccf-f66c-4b8c-9324-78388ed07394 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "c0b50299-41b1-48cf-b075-08ca569a1bd5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.269 189300 DEBUG nova.compute.manager [req-92722e93-eb19-485d-8194-f76abc75aec3 req-04b3fccf-f66c-4b8c-9324-78388ed07394 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Processing event network-vif-plugged-6c1cb38b-9fde-458f-a36b-d1c95b04690c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.270 189300 DEBUG nova.compute.manager [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Instance event wait completed in 11 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.270 189300 DEBUG nova.compute.manager [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.275 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764353894.275226, 1b9021c0-08c4-448d-9f6c-a589a543fb4c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.275 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] VM Resumed (Lifecycle Event)#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.278 189300 DEBUG nova.virt.libvirt.driver [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.285 189300 DEBUG nova.virt.libvirt.driver [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.290 189300 INFO nova.virt.libvirt.driver [-] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Instance spawned successfully.#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.290 189300 DEBUG nova.virt.libvirt.driver [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.292 189300 INFO nova.virt.libvirt.driver [-] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Instance spawned successfully.#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.292 189300 DEBUG nova.virt.libvirt.driver [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.298 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.304 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.324 189300 DEBUG nova.virt.libvirt.driver [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.325 189300 DEBUG nova.virt.libvirt.driver [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.326 189300 DEBUG nova.virt.libvirt.driver [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.327 189300 DEBUG nova.virt.libvirt.driver [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.327 189300 DEBUG nova.virt.libvirt.driver [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.328 189300 DEBUG nova.virt.libvirt.driver [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.333 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.334 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764353894.2752802, c0b50299-41b1-48cf-b075-08ca569a1bd5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.335 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] VM Resumed (Lifecycle Event)#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.339 189300 DEBUG nova.virt.libvirt.driver [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.340 189300 DEBUG nova.virt.libvirt.driver [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.340 189300 DEBUG nova.virt.libvirt.driver [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.341 189300 DEBUG nova.virt.libvirt.driver [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.341 189300 DEBUG nova.virt.libvirt.driver [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.342 189300 DEBUG nova.virt.libvirt.driver [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.386 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.392 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.423 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.436 189300 INFO nova.compute.manager [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Took 18.21 seconds to spawn the instance on the hypervisor.#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.436 189300 DEBUG nova.compute.manager [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.452 189300 INFO nova.compute.manager [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Took 18.47 seconds to spawn the instance on the hypervisor.#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.452 189300 DEBUG nova.compute.manager [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.553 189300 INFO nova.compute.manager [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Took 20.06 seconds to build instance.#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.567 189300 INFO nova.compute.manager [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Took 20.15 seconds to build instance.#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.586 189300 DEBUG oslo_concurrency.lockutils [None req-b9c2d120-8440-452f-90a2-f434f4f230bc f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.588 189300 DEBUG oslo_concurrency.lockutils [None req-12f2913e-8d7d-4fb8-a846-e899ffd79f5d bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Lock "c0b50299-41b1-48cf-b075-08ca569a1bd5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:14 compute-0 nova_compute[189296]: 2025-11-28 18:18:14.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:18:15 compute-0 nova_compute[189296]: 2025-11-28 18:18:15.347 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:15 compute-0 NetworkManager[56307]: <info>  [1764353895.3486] manager: (patch-provnet-564e20d3-e524-48c8-993a-ae41282beadd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Nov 28 18:18:15 compute-0 NetworkManager[56307]: <info>  [1764353895.3516] manager: (patch-br-int-to-provnet-564e20d3-e524-48c8-993a-ae41282beadd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Nov 28 18:18:15 compute-0 nova_compute[189296]: 2025-11-28 18:18:15.417 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:15 compute-0 ovn_controller[97771]: 2025-11-28T18:18:15Z|00090|binding|INFO|Releasing lport b8201a63-3ccc-4661-a145-e0b355d53c38 from this chassis (sb_readonly=0)
Nov 28 18:18:15 compute-0 ovn_controller[97771]: 2025-11-28T18:18:15Z|00091|binding|INFO|Releasing lport 887c8718-c327-47ee-a268-31ddec78a450 from this chassis (sb_readonly=0)
Nov 28 18:18:15 compute-0 ovn_controller[97771]: 2025-11-28T18:18:15Z|00092|binding|INFO|Releasing lport c8eddf3b-1e0b-416b-ad1a-748f52f665f0 from this chassis (sb_readonly=0)
Nov 28 18:18:15 compute-0 nova_compute[189296]: 2025-11-28 18:18:15.438 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:15 compute-0 nova_compute[189296]: 2025-11-28 18:18:15.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:18:15 compute-0 nova_compute[189296]: 2025-11-28 18:18:15.632 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:18:15 compute-0 nova_compute[189296]: 2025-11-28 18:18:15.632 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 18:18:15 compute-0 nova_compute[189296]: 2025-11-28 18:18:15.932 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:16 compute-0 nova_compute[189296]: 2025-11-28 18:18:16.132 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-9d9438df-a3bc-4004-95a3-0d76f449fe7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:18:16 compute-0 nova_compute[189296]: 2025-11-28 18:18:16.133 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-9d9438df-a3bc-4004-95a3-0d76f449fe7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:18:16 compute-0 nova_compute[189296]: 2025-11-28 18:18:16.133 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:18:16 compute-0 nova_compute[189296]: 2025-11-28 18:18:16.134 189300 DEBUG nova.objects.instance [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9d9438df-a3bc-4004-95a3-0d76f449fe7e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:18:16 compute-0 nova_compute[189296]: 2025-11-28 18:18:16.438 189300 DEBUG nova.compute.manager [req-79e0222f-e16a-4f94-859a-aeb7fdab3751 req-08068f87-dd55-4a8b-972a-8149a6e352c3 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Received event network-changed-0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:18:16 compute-0 nova_compute[189296]: 2025-11-28 18:18:16.439 189300 DEBUG nova.compute.manager [req-79e0222f-e16a-4f94-859a-aeb7fdab3751 req-08068f87-dd55-4a8b-972a-8149a6e352c3 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Refreshing instance network info cache due to event network-changed-0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 28 18:18:16 compute-0 nova_compute[189296]: 2025-11-28 18:18:16.439 189300 DEBUG oslo_concurrency.lockutils [req-79e0222f-e16a-4f94-859a-aeb7fdab3751 req-08068f87-dd55-4a8b-972a-8149a6e352c3 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-9d9438df-a3bc-4004-95a3-0d76f449fe7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:18:16 compute-0 nova_compute[189296]: 2025-11-28 18:18:16.989 189300 DEBUG nova.compute.manager [req-904fcb61-fcad-48c3-8b73-5f3d03b11ab7 req-6ef70a66-1d52-41f6-8991-c1dccf675291 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Received event network-vif-plugged-6c1cb38b-9fde-458f-a36b-d1c95b04690c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:18:16 compute-0 nova_compute[189296]: 2025-11-28 18:18:16.989 189300 DEBUG oslo_concurrency.lockutils [req-904fcb61-fcad-48c3-8b73-5f3d03b11ab7 req-6ef70a66-1d52-41f6-8991-c1dccf675291 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "c0b50299-41b1-48cf-b075-08ca569a1bd5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:16 compute-0 nova_compute[189296]: 2025-11-28 18:18:16.990 189300 DEBUG oslo_concurrency.lockutils [req-904fcb61-fcad-48c3-8b73-5f3d03b11ab7 req-6ef70a66-1d52-41f6-8991-c1dccf675291 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "c0b50299-41b1-48cf-b075-08ca569a1bd5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:16 compute-0 nova_compute[189296]: 2025-11-28 18:18:16.990 189300 DEBUG oslo_concurrency.lockutils [req-904fcb61-fcad-48c3-8b73-5f3d03b11ab7 req-6ef70a66-1d52-41f6-8991-c1dccf675291 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "c0b50299-41b1-48cf-b075-08ca569a1bd5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:16 compute-0 nova_compute[189296]: 2025-11-28 18:18:16.990 189300 DEBUG nova.compute.manager [req-904fcb61-fcad-48c3-8b73-5f3d03b11ab7 req-6ef70a66-1d52-41f6-8991-c1dccf675291 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] No waiting events found dispatching network-vif-plugged-6c1cb38b-9fde-458f-a36b-d1c95b04690c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:18:16 compute-0 nova_compute[189296]: 2025-11-28 18:18:16.990 189300 WARNING nova.compute.manager [req-904fcb61-fcad-48c3-8b73-5f3d03b11ab7 req-6ef70a66-1d52-41f6-8991-c1dccf675291 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Received unexpected event network-vif-plugged-6c1cb38b-9fde-458f-a36b-d1c95b04690c for instance with vm_state active and task_state None.#033[00m
Nov 28 18:18:17 compute-0 podman[248642]: 2025-11-28 18:18:17.011222019 +0000 UTC m=+0.075534223 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 28 18:18:17 compute-0 podman[248643]: 2025-11-28 18:18:17.024210997 +0000 UTC m=+0.086618234 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 28 18:18:17 compute-0 nova_compute[189296]: 2025-11-28 18:18:17.724 189300 DEBUG oslo_concurrency.lockutils [None req-23df529e-2710-4342-b709-ec5082387d18 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Acquiring lock "9d9438df-a3bc-4004-95a3-0d76f449fe7e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:17 compute-0 nova_compute[189296]: 2025-11-28 18:18:17.725 189300 DEBUG oslo_concurrency.lockutils [None req-23df529e-2710-4342-b709-ec5082387d18 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Lock "9d9438df-a3bc-4004-95a3-0d76f449fe7e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:17 compute-0 nova_compute[189296]: 2025-11-28 18:18:17.725 189300 DEBUG oslo_concurrency.lockutils [None req-23df529e-2710-4342-b709-ec5082387d18 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Acquiring lock "9d9438df-a3bc-4004-95a3-0d76f449fe7e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:17 compute-0 nova_compute[189296]: 2025-11-28 18:18:17.726 189300 DEBUG oslo_concurrency.lockutils [None req-23df529e-2710-4342-b709-ec5082387d18 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Lock "9d9438df-a3bc-4004-95a3-0d76f449fe7e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:17 compute-0 nova_compute[189296]: 2025-11-28 18:18:17.726 189300 DEBUG oslo_concurrency.lockutils [None req-23df529e-2710-4342-b709-ec5082387d18 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Lock "9d9438df-a3bc-4004-95a3-0d76f449fe7e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:17 compute-0 nova_compute[189296]: 2025-11-28 18:18:17.727 189300 INFO nova.compute.manager [None req-23df529e-2710-4342-b709-ec5082387d18 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Terminating instance#033[00m
Nov 28 18:18:17 compute-0 nova_compute[189296]: 2025-11-28 18:18:17.728 189300 DEBUG nova.compute.manager [None req-23df529e-2710-4342-b709-ec5082387d18 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 28 18:18:17 compute-0 kernel: tap0c9a98c5-1b (unregistering): left promiscuous mode
Nov 28 18:18:17 compute-0 NetworkManager[56307]: <info>  [1764353897.7646] device (tap0c9a98c5-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 28 18:18:17 compute-0 nova_compute[189296]: 2025-11-28 18:18:17.775 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:17 compute-0 ovn_controller[97771]: 2025-11-28T18:18:17Z|00093|binding|INFO|Releasing lport 0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 from this chassis (sb_readonly=0)
Nov 28 18:18:17 compute-0 ovn_controller[97771]: 2025-11-28T18:18:17Z|00094|binding|INFO|Setting lport 0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 down in Southbound
Nov 28 18:18:17 compute-0 ovn_controller[97771]: 2025-11-28T18:18:17Z|00095|binding|INFO|Removing iface tap0c9a98c5-1b ovn-installed in OVS
Nov 28 18:18:17 compute-0 nova_compute[189296]: 2025-11-28 18:18:17.779 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:17 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:17.785 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:73:08 10.100.0.9'], port_security=['fa:16:3e:84:73:08 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9d9438df-a3bc-4004-95a3-0d76f449fe7e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e87bc234-f5cf-4903-8735-1e50c5da2392', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fb27a9d222b44ca3a79da5ce054611e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4ed42db5-cc07-4ced-9aa8-8eb1c68cde2b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.196'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cc56f1c9-cc2d-473f-b3d6-7ae98cc4845e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:18:17 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:17.791 106624 INFO neutron.agent.ovn.metadata.agent [-] Port 0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 in datapath e87bc234-f5cf-4903-8735-1e50c5da2392 unbound from our chassis#033[00m
Nov 28 18:18:17 compute-0 nova_compute[189296]: 2025-11-28 18:18:17.794 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:17 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:17.801 106624 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e87bc234-f5cf-4903-8735-1e50c5da2392, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 28 18:18:17 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:17.803 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e56785-ac15-4cdc-a33e-bd78de001811]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:17 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:17.804 106624 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e87bc234-f5cf-4903-8735-1e50c5da2392 namespace which is not needed anymore#033[00m
Nov 28 18:18:17 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Nov 28 18:18:17 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 6.177s CPU time.
Nov 28 18:18:17 compute-0 systemd-machined[155703]: Machine qemu-7-instance-00000007 terminated.
Nov 28 18:18:17 compute-0 neutron-haproxy-ovnmeta-e87bc234-f5cf-4903-8735-1e50c5da2392[248258]: [NOTICE]   (248268) : haproxy version is 2.8.14-c23fe91
Nov 28 18:18:17 compute-0 neutron-haproxy-ovnmeta-e87bc234-f5cf-4903-8735-1e50c5da2392[248258]: [NOTICE]   (248268) : path to executable is /usr/sbin/haproxy
Nov 28 18:18:17 compute-0 neutron-haproxy-ovnmeta-e87bc234-f5cf-4903-8735-1e50c5da2392[248258]: [WARNING]  (248268) : Exiting Master process...
Nov 28 18:18:17 compute-0 neutron-haproxy-ovnmeta-e87bc234-f5cf-4903-8735-1e50c5da2392[248258]: [ALERT]    (248268) : Current worker (248273) exited with code 143 (Terminated)
Nov 28 18:18:17 compute-0 neutron-haproxy-ovnmeta-e87bc234-f5cf-4903-8735-1e50c5da2392[248258]: [WARNING]  (248268) : All workers exited. Exiting... (0)
Nov 28 18:18:17 compute-0 kernel: tap0c9a98c5-1b: entered promiscuous mode
Nov 28 18:18:17 compute-0 systemd[1]: libpod-658192dac53e302db54c6e470810ed9404340f10f0934250bab375c06d1471e7.scope: Deactivated successfully.
Nov 28 18:18:17 compute-0 podman[248703]: 2025-11-28 18:18:17.955928518 +0000 UTC m=+0.056028404 container died 658192dac53e302db54c6e470810ed9404340f10f0934250bab375c06d1471e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e87bc234-f5cf-4903-8735-1e50c5da2392, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:18:17 compute-0 kernel: tap0c9a98c5-1b (unregistering): left promiscuous mode
Nov 28 18:18:17 compute-0 NetworkManager[56307]: <info>  [1764353897.9618] manager: (tap0c9a98c5-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Nov 28 18:18:17 compute-0 ovn_controller[97771]: 2025-11-28T18:18:17Z|00096|binding|INFO|Claiming lport 0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 for this chassis.
Nov 28 18:18:17 compute-0 ovn_controller[97771]: 2025-11-28T18:18:17Z|00097|binding|INFO|0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98: Claiming fa:16:3e:84:73:08 10.100.0.9
Nov 28 18:18:17 compute-0 nova_compute[189296]: 2025-11-28 18:18:17.966 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:17 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:17.981 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:73:08 10.100.0.9'], port_security=['fa:16:3e:84:73:08 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9d9438df-a3bc-4004-95a3-0d76f449fe7e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e87bc234-f5cf-4903-8735-1e50c5da2392', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fb27a9d222b44ca3a79da5ce054611e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4ed42db5-cc07-4ced-9aa8-8eb1c68cde2b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.196'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cc56f1c9-cc2d-473f-b3d6-7ae98cc4845e, chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:18:17 compute-0 ovn_controller[97771]: 2025-11-28T18:18:17Z|00098|binding|INFO|Releasing lport 0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 from this chassis (sb_readonly=0)
Nov 28 18:18:17 compute-0 nova_compute[189296]: 2025-11-28 18:18:17.992 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:18.009 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:73:08 10.100.0.9'], port_security=['fa:16:3e:84:73:08 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9d9438df-a3bc-4004-95a3-0d76f449fe7e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e87bc234-f5cf-4903-8735-1e50c5da2392', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fb27a9d222b44ca3a79da5ce054611e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4ed42db5-cc07-4ced-9aa8-8eb1c68cde2b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.196'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cc56f1c9-cc2d-473f-b3d6-7ae98cc4845e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:18:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-658192dac53e302db54c6e470810ed9404340f10f0934250bab375c06d1471e7-userdata-shm.mount: Deactivated successfully.
Nov 28 18:18:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-469ebe7bc3eaf000933fbb07d2f451743a6185fd324064f80d4775274da65bdc-merged.mount: Deactivated successfully.
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.025 189300 INFO nova.virt.libvirt.driver [-] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Instance destroyed successfully.#033[00m
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.026 189300 DEBUG nova.objects.instance [None req-23df529e-2710-4342-b709-ec5082387d18 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Lazy-loading 'resources' on Instance uuid 9d9438df-a3bc-4004-95a3-0d76f449fe7e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:18:18 compute-0 podman[248703]: 2025-11-28 18:18:18.029968354 +0000 UTC m=+0.130068240 container cleanup 658192dac53e302db54c6e470810ed9404340f10f0934250bab375c06d1471e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e87bc234-f5cf-4903-8735-1e50c5da2392, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:18:18 compute-0 systemd[1]: libpod-conmon-658192dac53e302db54c6e470810ed9404340f10f0934250bab375c06d1471e7.scope: Deactivated successfully.
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.087 189300 DEBUG nova.virt.libvirt.vif [None req-23df529e-2710-4342-b709-ec5082387d18 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-28T18:17:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-841468157',display_name='tempest-ServersTestManualDisk-server-841468157',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-841468157',id=7,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF0lU3reW+r6+CL4oiKiTJeTxvoYGtNnyZC7K2JFkFHBUYEDbAZx3apgSql2jHITUVC9Q5dSP2o1/FA3PKXjtRYzKuW2OQzECF5F4nGtMC9kKi5U05uhynuj7W2UehWBBw==',key_name='tempest-keypair-617998503',keypairs=<?>,launch_index=0,launched_at=2025-11-28T18:18:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fb27a9d222b44ca3a79da5ce054611e5',ramdisk_id='',reservation_id='r-65xdvbo8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-1661420842',owner_user_name='tempest-ServersTestManualDisk-1661420842-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-28T18:18:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='28259861c020436091f3ab3eb680fa5d',uuid=9d9438df-a3bc-4004-95a3-0d76f449fe7e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98", "address": "fa:16:3e:84:73:08", "network": {"id": "e87bc234-f5cf-4903-8735-1e50c5da2392", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-967785827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb27a9d222b44ca3a79da5ce054611e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c9a98c5-1b", "ovs_interfaceid": "0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.088 189300 DEBUG nova.network.os_vif_util [None req-23df529e-2710-4342-b709-ec5082387d18 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Converting VIF {"id": "0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98", "address": "fa:16:3e:84:73:08", "network": {"id": "e87bc234-f5cf-4903-8735-1e50c5da2392", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-967785827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb27a9d222b44ca3a79da5ce054611e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c9a98c5-1b", "ovs_interfaceid": "0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.090 189300 DEBUG nova.network.os_vif_util [None req-23df529e-2710-4342-b709-ec5082387d18 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:73:08,bridge_name='br-int',has_traffic_filtering=True,id=0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98,network=Network(e87bc234-f5cf-4903-8735-1e50c5da2392),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c9a98c5-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.092 189300 DEBUG os_vif [None req-23df529e-2710-4342-b709-ec5082387d18 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:73:08,bridge_name='br-int',has_traffic_filtering=True,id=0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98,network=Network(e87bc234-f5cf-4903-8735-1e50c5da2392),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c9a98c5-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.093 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.094 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0c9a98c5-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.095 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.096 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.098 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.101 189300 INFO os_vif [None req-23df529e-2710-4342-b709-ec5082387d18 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:73:08,bridge_name='br-int',has_traffic_filtering=True,id=0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98,network=Network(e87bc234-f5cf-4903-8735-1e50c5da2392),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c9a98c5-1b')#033[00m
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.102 189300 INFO nova.virt.libvirt.driver [None req-23df529e-2710-4342-b709-ec5082387d18 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Deleting instance files /var/lib/nova/instances/9d9438df-a3bc-4004-95a3-0d76f449fe7e_del#033[00m
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.103 189300 INFO nova.virt.libvirt.driver [None req-23df529e-2710-4342-b709-ec5082387d18 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Deletion of /var/lib/nova/instances/9d9438df-a3bc-4004-95a3-0d76f449fe7e_del complete#033[00m
Nov 28 18:18:18 compute-0 podman[248750]: 2025-11-28 18:18:18.117038868 +0000 UTC m=+0.058350361 container remove 658192dac53e302db54c6e470810ed9404340f10f0934250bab375c06d1471e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e87bc234-f5cf-4903-8735-1e50c5da2392, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 28 18:18:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:18.125 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[3b7bf0a9-a912-4292-9ca7-131dc82a4671]: (4, ('Fri Nov 28 06:18:17 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e87bc234-f5cf-4903-8735-1e50c5da2392 (658192dac53e302db54c6e470810ed9404340f10f0934250bab375c06d1471e7)\n658192dac53e302db54c6e470810ed9404340f10f0934250bab375c06d1471e7\nFri Nov 28 06:18:18 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e87bc234-f5cf-4903-8735-1e50c5da2392 (658192dac53e302db54c6e470810ed9404340f10f0934250bab375c06d1471e7)\n658192dac53e302db54c6e470810ed9404340f10f0934250bab375c06d1471e7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:18.127 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[3e50f094-9bda-4977-a38b-df74aebbb0f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:18.128 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape87bc234-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.130 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:18 compute-0 kernel: tape87bc234-f0: left promiscuous mode
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.144 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:18.146 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[286e799c-d86b-4eb8-bd30-61d0e1e967ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:18.157 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[dc210885-3d9a-467b-9e26-f5f83131c1ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:18.159 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[76c66d68-8cb4-4b42-bdb3-ccf474fecc4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:18 compute-0 podman[248751]: 2025-11-28 18:18:18.173747159 +0000 UTC m=+0.091161076 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 18:18:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:18.179 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[2feaf8e2-d536-4b04-ad41-d6eac32156d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502103, 'reachable_time': 18363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248797, 'error': None, 'target': 'ovnmeta-e87bc234-f5cf-4903-8735-1e50c5da2392', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:18.183 106734 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e87bc234-f5cf-4903-8735-1e50c5da2392 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 28 18:18:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:18.183 106734 DEBUG oslo.privsep.daemon [-] privsep: reply[fb1c35fa-0201-46ef-9aa3-9fe2fd8324e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:18.184 106624 INFO neutron.agent.ovn.metadata.agent [-] Port 0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 in datapath e87bc234-f5cf-4903-8735-1e50c5da2392 unbound from our chassis#033[00m
Nov 28 18:18:18 compute-0 systemd[1]: run-netns-ovnmeta\x2de87bc234\x2df5cf\x2d4903\x2d8735\x2d1e50c5da2392.mount: Deactivated successfully.
Nov 28 18:18:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:18.185 106624 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e87bc234-f5cf-4903-8735-1e50c5da2392, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 28 18:18:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:18.186 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[4ea0080d-dbea-4086-8bc6-f1fc6fc0500f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:18.186 106624 INFO neutron.agent.ovn.metadata.agent [-] Port 0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 in datapath e87bc234-f5cf-4903-8735-1e50c5da2392 unbound from our chassis#033[00m
Nov 28 18:18:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:18.187 106624 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e87bc234-f5cf-4903-8735-1e50c5da2392, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 28 18:18:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:18.188 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[42ad9ab2-e264-4e2c-b240-5222533cc65b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:18 compute-0 podman[248758]: 2025-11-28 18:18:18.196083757 +0000 UTC m=+0.113533635 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, name=ubi9, version=9.4, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, distribution-scope=public, container_name=kepler, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vendor=Red Hat, Inc., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.209 189300 INFO nova.compute.manager [None req-23df529e-2710-4342-b709-ec5082387d18 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Took 0.48 seconds to destroy the instance on the hypervisor.#033[00m
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.210 189300 DEBUG oslo.service.loopingcall [None req-23df529e-2710-4342-b709-ec5082387d18 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.210 189300 DEBUG nova.compute.manager [-] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.210 189300 DEBUG nova.network.neutron [-] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.285 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Updating instance_info_cache with network_info: [{"id": "0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98", "address": "fa:16:3e:84:73:08", "network": {"id": "e87bc234-f5cf-4903-8735-1e50c5da2392", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-967785827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb27a9d222b44ca3a79da5ce054611e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c9a98c5-1b", "ovs_interfaceid": "0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.318 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-9d9438df-a3bc-4004-95a3-0d76f449fe7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.319 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.319 189300 DEBUG oslo_concurrency.lockutils [req-79e0222f-e16a-4f94-859a-aeb7fdab3751 req-08068f87-dd55-4a8b-972a-8149a6e352c3 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-9d9438df-a3bc-4004-95a3-0d76f449fe7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.319 189300 DEBUG nova.network.neutron [req-79e0222f-e16a-4f94-859a-aeb7fdab3751 req-08068f87-dd55-4a8b-972a-8149a6e352c3 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Refreshing network info cache for port 0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.320 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.585 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:18:18 compute-0 nova_compute[189296]: 2025-11-28 18:18:18.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:18:19 compute-0 nova_compute[189296]: 2025-11-28 18:18:19.315 189300 DEBUG nova.compute.manager [req-f9810c35-3c80-4749-adbe-6d8dace4d107 req-4718b9c3-fc9f-4298-8c60-1748b669918e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Received event network-vif-unplugged-0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:18:19 compute-0 nova_compute[189296]: 2025-11-28 18:18:19.316 189300 DEBUG oslo_concurrency.lockutils [req-f9810c35-3c80-4749-adbe-6d8dace4d107 req-4718b9c3-fc9f-4298-8c60-1748b669918e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "9d9438df-a3bc-4004-95a3-0d76f449fe7e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:19 compute-0 nova_compute[189296]: 2025-11-28 18:18:19.317 189300 DEBUG oslo_concurrency.lockutils [req-f9810c35-3c80-4749-adbe-6d8dace4d107 req-4718b9c3-fc9f-4298-8c60-1748b669918e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "9d9438df-a3bc-4004-95a3-0d76f449fe7e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:19 compute-0 nova_compute[189296]: 2025-11-28 18:18:19.318 189300 DEBUG oslo_concurrency.lockutils [req-f9810c35-3c80-4749-adbe-6d8dace4d107 req-4718b9c3-fc9f-4298-8c60-1748b669918e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "9d9438df-a3bc-4004-95a3-0d76f449fe7e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:19 compute-0 nova_compute[189296]: 2025-11-28 18:18:19.318 189300 DEBUG nova.compute.manager [req-f9810c35-3c80-4749-adbe-6d8dace4d107 req-4718b9c3-fc9f-4298-8c60-1748b669918e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] No waiting events found dispatching network-vif-unplugged-0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:18:19 compute-0 nova_compute[189296]: 2025-11-28 18:18:19.319 189300 DEBUG nova.compute.manager [req-f9810c35-3c80-4749-adbe-6d8dace4d107 req-4718b9c3-fc9f-4298-8c60-1748b669918e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Received event network-vif-unplugged-0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 28 18:18:19 compute-0 nova_compute[189296]: 2025-11-28 18:18:19.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:18:19 compute-0 nova_compute[189296]: 2025-11-28 18:18:19.628 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:18:19 compute-0 nova_compute[189296]: 2025-11-28 18:18:19.628 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 28 18:18:19 compute-0 nova_compute[189296]: 2025-11-28 18:18:19.794 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:19 compute-0 nova_compute[189296]: 2025-11-28 18:18:19.934 189300 DEBUG nova.network.neutron [-] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:18:19 compute-0 nova_compute[189296]: 2025-11-28 18:18:19.963 189300 INFO nova.compute.manager [-] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Took 1.75 seconds to deallocate network for instance.#033[00m
Nov 28 18:18:20 compute-0 nova_compute[189296]: 2025-11-28 18:18:20.012 189300 DEBUG nova.compute.manager [req-5aaef352-c68a-4881-88be-794460d723f9 req-c4dfc5ab-f3ca-4803-8c04-cda722625fb1 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Received event network-vif-deleted-0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:18:20 compute-0 nova_compute[189296]: 2025-11-28 18:18:20.014 189300 DEBUG oslo_concurrency.lockutils [None req-23df529e-2710-4342-b709-ec5082387d18 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:20 compute-0 nova_compute[189296]: 2025-11-28 18:18:20.015 189300 DEBUG oslo_concurrency.lockutils [None req-23df529e-2710-4342-b709-ec5082387d18 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:20 compute-0 nova_compute[189296]: 2025-11-28 18:18:20.297 189300 DEBUG nova.compute.provider_tree [None req-23df529e-2710-4342-b709-ec5082387d18 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:18:20 compute-0 nova_compute[189296]: 2025-11-28 18:18:20.313 189300 DEBUG nova.scheduler.client.report [None req-23df529e-2710-4342-b709-ec5082387d18 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:18:20 compute-0 nova_compute[189296]: 2025-11-28 18:18:20.336 189300 DEBUG oslo_concurrency.lockutils [None req-23df529e-2710-4342-b709-ec5082387d18 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.321s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:20 compute-0 nova_compute[189296]: 2025-11-28 18:18:20.358 189300 INFO nova.scheduler.client.report [None req-23df529e-2710-4342-b709-ec5082387d18 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Deleted allocations for instance 9d9438df-a3bc-4004-95a3-0d76f449fe7e#033[00m
Nov 28 18:18:20 compute-0 nova_compute[189296]: 2025-11-28 18:18:20.435 189300 DEBUG oslo_concurrency.lockutils [None req-23df529e-2710-4342-b709-ec5082387d18 28259861c020436091f3ab3eb680fa5d fb27a9d222b44ca3a79da5ce054611e5 - - default default] Lock "9d9438df-a3bc-4004-95a3-0d76f449fe7e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:20 compute-0 nova_compute[189296]: 2025-11-28 18:18:20.701 189300 DEBUG nova.network.neutron [req-79e0222f-e16a-4f94-859a-aeb7fdab3751 req-08068f87-dd55-4a8b-972a-8149a6e352c3 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Updated VIF entry in instance network info cache for port 0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 28 18:18:20 compute-0 nova_compute[189296]: 2025-11-28 18:18:20.702 189300 DEBUG nova.network.neutron [req-79e0222f-e16a-4f94-859a-aeb7fdab3751 req-08068f87-dd55-4a8b-972a-8149a6e352c3 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Updating instance_info_cache with network_info: [{"id": "0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98", "address": "fa:16:3e:84:73:08", "network": {"id": "e87bc234-f5cf-4903-8735-1e50c5da2392", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-967785827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fb27a9d222b44ca3a79da5ce054611e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c9a98c5-1b", "ovs_interfaceid": "0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:18:20 compute-0 nova_compute[189296]: 2025-11-28 18:18:20.720 189300 DEBUG oslo_concurrency.lockutils [req-79e0222f-e16a-4f94-859a-aeb7fdab3751 req-08068f87-dd55-4a8b-972a-8149a6e352c3 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-9d9438df-a3bc-4004-95a3-0d76f449fe7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:18:20 compute-0 nova_compute[189296]: 2025-11-28 18:18:20.990 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.052 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Triggering sync for uuid c0b50299-41b1-48cf-b075-08ca569a1bd5 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.053 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Triggering sync for uuid 1b9021c0-08c4-448d-9f6c-a589a543fb4c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.053 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "c0b50299-41b1-48cf-b075-08ca569a1bd5" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.054 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "c0b50299-41b1-48cf-b075-08ca569a1bd5" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.055 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.056 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.133 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "c0b50299-41b1-48cf-b075-08ca569a1bd5" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.144 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.658 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.658 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.659 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.659 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.781 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:18:21 compute-0 podman[248804]: 2025-11-28 18:18:21.814280809 +0000 UTC m=+0.110997342 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller)
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.849 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.850 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.887 189300 DEBUG nova.compute.manager [req-c0a75f53-54e6-4bf5-99ab-36c0a76299dd req-8dd8d690-66e1-41e6-a94a-ac008ce75de2 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Received event network-vif-plugged-0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.888 189300 DEBUG oslo_concurrency.lockutils [req-c0a75f53-54e6-4bf5-99ab-36c0a76299dd req-8dd8d690-66e1-41e6-a94a-ac008ce75de2 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "9d9438df-a3bc-4004-95a3-0d76f449fe7e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.889 189300 DEBUG oslo_concurrency.lockutils [req-c0a75f53-54e6-4bf5-99ab-36c0a76299dd req-8dd8d690-66e1-41e6-a94a-ac008ce75de2 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "9d9438df-a3bc-4004-95a3-0d76f449fe7e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.889 189300 DEBUG oslo_concurrency.lockutils [req-c0a75f53-54e6-4bf5-99ab-36c0a76299dd req-8dd8d690-66e1-41e6-a94a-ac008ce75de2 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "9d9438df-a3bc-4004-95a3-0d76f449fe7e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.890 189300 DEBUG nova.compute.manager [req-c0a75f53-54e6-4bf5-99ab-36c0a76299dd req-8dd8d690-66e1-41e6-a94a-ac008ce75de2 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] No waiting events found dispatching network-vif-plugged-0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.890 189300 WARNING nova.compute.manager [req-c0a75f53-54e6-4bf5-99ab-36c0a76299dd req-8dd8d690-66e1-41e6-a94a-ac008ce75de2 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Received unexpected event network-vif-plugged-0c9a98c5-1bfc-4c4e-a54f-bb5e71e41d98 for instance with vm_state deleted and task_state None.#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.890 189300 DEBUG nova.compute.manager [req-c0a75f53-54e6-4bf5-99ab-36c0a76299dd req-8dd8d690-66e1-41e6-a94a-ac008ce75de2 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Received event network-changed-6c1cb38b-9fde-458f-a36b-d1c95b04690c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.891 189300 DEBUG nova.compute.manager [req-c0a75f53-54e6-4bf5-99ab-36c0a76299dd req-8dd8d690-66e1-41e6-a94a-ac008ce75de2 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Refreshing instance network info cache due to event network-changed-6c1cb38b-9fde-458f-a36b-d1c95b04690c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.891 189300 DEBUG oslo_concurrency.lockutils [req-c0a75f53-54e6-4bf5-99ab-36c0a76299dd req-8dd8d690-66e1-41e6-a94a-ac008ce75de2 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-c0b50299-41b1-48cf-b075-08ca569a1bd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.892 189300 DEBUG oslo_concurrency.lockutils [req-c0a75f53-54e6-4bf5-99ab-36c0a76299dd req-8dd8d690-66e1-41e6-a94a-ac008ce75de2 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-c0b50299-41b1-48cf-b075-08ca569a1bd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.892 189300 DEBUG nova.network.neutron [req-c0a75f53-54e6-4bf5-99ab-36c0a76299dd req-8dd8d690-66e1-41e6-a94a-ac008ce75de2 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Refreshing network info cache for port 6c1cb38b-9fde-458f-a36b-d1c95b04690c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.911 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.924 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c0b50299-41b1-48cf-b075-08ca569a1bd5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.983 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c0b50299-41b1-48cf-b075-08ca569a1bd5/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:18:21 compute-0 nova_compute[189296]: 2025-11-28 18:18:21.984 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c0b50299-41b1-48cf-b075-08ca569a1bd5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.047 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c0b50299-41b1-48cf-b075-08ca569a1bd5/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.220 189300 DEBUG oslo_concurrency.lockutils [None req-ac4cc817-00f2-4060-8b9a-0773ac789c55 bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Acquiring lock "c0b50299-41b1-48cf-b075-08ca569a1bd5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.221 189300 DEBUG oslo_concurrency.lockutils [None req-ac4cc817-00f2-4060-8b9a-0773ac789c55 bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Lock "c0b50299-41b1-48cf-b075-08ca569a1bd5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.221 189300 DEBUG oslo_concurrency.lockutils [None req-ac4cc817-00f2-4060-8b9a-0773ac789c55 bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Acquiring lock "c0b50299-41b1-48cf-b075-08ca569a1bd5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.222 189300 DEBUG oslo_concurrency.lockutils [None req-ac4cc817-00f2-4060-8b9a-0773ac789c55 bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Lock "c0b50299-41b1-48cf-b075-08ca569a1bd5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.222 189300 DEBUG oslo_concurrency.lockutils [None req-ac4cc817-00f2-4060-8b9a-0773ac789c55 bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Lock "c0b50299-41b1-48cf-b075-08ca569a1bd5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.224 189300 INFO nova.compute.manager [None req-ac4cc817-00f2-4060-8b9a-0773ac789c55 bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Terminating instance#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.225 189300 DEBUG nova.compute.manager [None req-ac4cc817-00f2-4060-8b9a-0773ac789c55 bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 28 18:18:22 compute-0 kernel: tap6c1cb38b-9f (unregistering): left promiscuous mode
Nov 28 18:18:22 compute-0 NetworkManager[56307]: <info>  [1764353902.2644] device (tap6c1cb38b-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 28 18:18:22 compute-0 virtnodedevd[189596]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 28 18:18:22 compute-0 virtnodedevd[189596]: hostname: compute-0
Nov 28 18:18:22 compute-0 virtnodedevd[189596]: ethtool ioctl error on tap6c1cb38b-9f: No such device
Nov 28 18:18:22 compute-0 ovn_controller[97771]: 2025-11-28T18:18:22Z|00099|binding|INFO|Releasing lport 6c1cb38b-9fde-458f-a36b-d1c95b04690c from this chassis (sb_readonly=0)
Nov 28 18:18:22 compute-0 ovn_controller[97771]: 2025-11-28T18:18:22Z|00100|binding|INFO|Setting lport 6c1cb38b-9fde-458f-a36b-d1c95b04690c down in Southbound
Nov 28 18:18:22 compute-0 ovn_controller[97771]: 2025-11-28T18:18:22Z|00101|binding|INFO|Removing iface tap6c1cb38b-9f ovn-installed in OVS
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.273 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:22 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:22.282 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:34:7e 10.100.0.7'], port_security=['fa:16:3e:1c:34:7e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c0b50299-41b1-48cf-b075-08ca569a1bd5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-970caef7-c556-4054-b603-3084ef389d78', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e27f3ae6d694d7ca975b778b997e12f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '670fcc8d-5461-45d8-a8b3-a1faeaa2cc9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bb12d81a-6f7d-4391-9557-40e0910d6d06, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=6c1cb38b-9fde-458f-a36b-d1c95b04690c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:18:22 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:22.284 106624 INFO neutron.agent.ovn.metadata.agent [-] Port 6c1cb38b-9fde-458f-a36b-d1c95b04690c in datapath 970caef7-c556-4054-b603-3084ef389d78 unbound from our chassis#033[00m
Nov 28 18:18:22 compute-0 virtnodedevd[189596]: ethtool ioctl error on tap6c1cb38b-9f: No such device
Nov 28 18:18:22 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:22.286 106624 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 970caef7-c556-4054-b603-3084ef389d78, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 28 18:18:22 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:22.287 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[83d942be-319e-4004-a01a-e4ddd85ea336]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:22 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:22.288 106624 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-970caef7-c556-4054-b603-3084ef389d78 namespace which is not needed anymore#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.291 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:22 compute-0 virtnodedevd[189596]: ethtool ioctl error on tap6c1cb38b-9f: No such device
Nov 28 18:18:22 compute-0 virtnodedevd[189596]: ethtool ioctl error on tap6c1cb38b-9f: No such device
Nov 28 18:18:22 compute-0 virtnodedevd[189596]: ethtool ioctl error on tap6c1cb38b-9f: No such device
Nov 28 18:18:22 compute-0 virtnodedevd[189596]: ethtool ioctl error on tap6c1cb38b-9f: No such device
Nov 28 18:18:22 compute-0 virtnodedevd[189596]: ethtool ioctl error on tap6c1cb38b-9f: No such device
Nov 28 18:18:22 compute-0 virtnodedevd[189596]: ethtool ioctl error on tap6c1cb38b-9f: No such device
Nov 28 18:18:22 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000008.scope: Deactivated successfully.
Nov 28 18:18:22 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000008.scope: Consumed 8.543s CPU time.
Nov 28 18:18:22 compute-0 systemd-machined[155703]: Machine qemu-9-instance-00000008 terminated.
Nov 28 18:18:22 compute-0 NetworkManager[56307]: <info>  [1764353902.4475] manager: (tap6c1cb38b-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Nov 28 18:18:22 compute-0 systemd-udevd[248854]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 18:18:22 compute-0 kernel: tap6c1cb38b-9f: entered promiscuous mode
Nov 28 18:18:22 compute-0 kernel: tap6c1cb38b-9f (unregistering): left promiscuous mode
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.461 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:22 compute-0 neutron-haproxy-ovnmeta-970caef7-c556-4054-b603-3084ef389d78[248567]: [NOTICE]   (248571) : haproxy version is 2.8.14-c23fe91
Nov 28 18:18:22 compute-0 neutron-haproxy-ovnmeta-970caef7-c556-4054-b603-3084ef389d78[248567]: [NOTICE]   (248571) : path to executable is /usr/sbin/haproxy
Nov 28 18:18:22 compute-0 neutron-haproxy-ovnmeta-970caef7-c556-4054-b603-3084ef389d78[248567]: [WARNING]  (248571) : Exiting Master process...
Nov 28 18:18:22 compute-0 neutron-haproxy-ovnmeta-970caef7-c556-4054-b603-3084ef389d78[248567]: [ALERT]    (248571) : Current worker (248573) exited with code 143 (Terminated)
Nov 28 18:18:22 compute-0 neutron-haproxy-ovnmeta-970caef7-c556-4054-b603-3084ef389d78[248567]: [WARNING]  (248571) : All workers exited. Exiting... (0)
Nov 28 18:18:22 compute-0 systemd[1]: libpod-17447c863a0b1d1a985f887e326a073872aa0b4d4269f5b5086209b675c42755.scope: Deactivated successfully.
Nov 28 18:18:22 compute-0 podman[248879]: 2025-11-28 18:18:22.476874252 +0000 UTC m=+0.072858567 container died 17447c863a0b1d1a985f887e326a073872aa0b4d4269f5b5086209b675c42755 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-970caef7-c556-4054-b603-3084ef389d78, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.505 189300 INFO nova.virt.libvirt.driver [-] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Instance destroyed successfully.#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.506 189300 DEBUG nova.objects.instance [None req-ac4cc817-00f2-4060-8b9a-0773ac789c55 bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Lazy-loading 'resources' on Instance uuid c0b50299-41b1-48cf-b075-08ca569a1bd5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:18:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-17447c863a0b1d1a985f887e326a073872aa0b4d4269f5b5086209b675c42755-userdata-shm.mount: Deactivated successfully.
Nov 28 18:18:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-29b7817e478e51c054f20f7df687a25818fe9f7292db9f9d22bf7a1bc1881e25-merged.mount: Deactivated successfully.
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.534 189300 DEBUG nova.virt.libvirt.vif [None req-ac4cc817-00f2-4060-8b9a-0773ac789c55 bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-28T18:17:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1437168499',display_name='tempest-ServersTestJSON-server-1437168499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1437168499',id=8,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI/spFmmDJn4hjiHto1O3HEG2lkmBxJ1SpHsrnNRxtEsV94pTDKBIisMSCAnYO3VLsMYl/ToKwmIRk9h56powWNIToqHeQAHPP2PdDFOueNrXgNE2YIBmYZhrVq8QAqSxQ==',key_name='tempest-keypair-1089872028',keypairs=<?>,launch_index=0,launched_at=2025-11-28T18:18:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4e27f3ae6d694d7ca975b778b997e12f',ramdisk_id='',reservation_id='r-072h272w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1480213909',owner_user_name='tempest-ServersTestJSON-1480213909-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-28T18:18:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bbe93898827d4d57a49114a72388c0ab',uuid=c0b50299-41b1-48cf-b075-08ca569a1bd5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6c1cb38b-9fde-458f-a36b-d1c95b04690c", "address": "fa:16:3e:1c:34:7e", "network": {"id": "970caef7-c556-4054-b603-3084ef389d78", "bridge": "br-int", "label": "tempest-ServersTestJSON-467423109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e27f3ae6d694d7ca975b778b997e12f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c1cb38b-9f", "ovs_interfaceid": "6c1cb38b-9fde-458f-a36b-d1c95b04690c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.536 189300 DEBUG nova.network.os_vif_util [None req-ac4cc817-00f2-4060-8b9a-0773ac789c55 bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Converting VIF {"id": "6c1cb38b-9fde-458f-a36b-d1c95b04690c", "address": "fa:16:3e:1c:34:7e", "network": {"id": "970caef7-c556-4054-b603-3084ef389d78", "bridge": "br-int", "label": "tempest-ServersTestJSON-467423109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e27f3ae6d694d7ca975b778b997e12f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c1cb38b-9f", "ovs_interfaceid": "6c1cb38b-9fde-458f-a36b-d1c95b04690c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.537 189300 DEBUG nova.network.os_vif_util [None req-ac4cc817-00f2-4060-8b9a-0773ac789c55 bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:34:7e,bridge_name='br-int',has_traffic_filtering=True,id=6c1cb38b-9fde-458f-a36b-d1c95b04690c,network=Network(970caef7-c556-4054-b603-3084ef389d78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c1cb38b-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.538 189300 DEBUG os_vif [None req-ac4cc817-00f2-4060-8b9a-0773ac789c55 bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:34:7e,bridge_name='br-int',has_traffic_filtering=True,id=6c1cb38b-9fde-458f-a36b-d1c95b04690c,network=Network(970caef7-c556-4054-b603-3084ef389d78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c1cb38b-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.540 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:22 compute-0 podman[248879]: 2025-11-28 18:18:22.541827035 +0000 UTC m=+0.137811340 container cleanup 17447c863a0b1d1a985f887e326a073872aa0b4d4269f5b5086209b675c42755 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-970caef7-c556-4054-b603-3084ef389d78, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.541 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c1cb38b-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.546 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.548 189300 INFO os_vif [None req-ac4cc817-00f2-4060-8b9a-0773ac789c55 bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:34:7e,bridge_name='br-int',has_traffic_filtering=True,id=6c1cb38b-9fde-458f-a36b-d1c95b04690c,network=Network(970caef7-c556-4054-b603-3084ef389d78),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c1cb38b-9f')#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.549 189300 INFO nova.virt.libvirt.driver [None req-ac4cc817-00f2-4060-8b9a-0773ac789c55 bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Deleting instance files /var/lib/nova/instances/c0b50299-41b1-48cf-b075-08ca569a1bd5_del#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.550 189300 INFO nova.virt.libvirt.driver [None req-ac4cc817-00f2-4060-8b9a-0773ac789c55 bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Deletion of /var/lib/nova/instances/c0b50299-41b1-48cf-b075-08ca569a1bd5_del complete#033[00m
Nov 28 18:18:22 compute-0 systemd[1]: libpod-conmon-17447c863a0b1d1a985f887e326a073872aa0b4d4269f5b5086209b675c42755.scope: Deactivated successfully.
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.595 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.597 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5049MB free_disk=72.34060668945312GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.598 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.598 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:22 compute-0 podman[248916]: 2025-11-28 18:18:22.622338249 +0000 UTC m=+0.051112564 container remove 17447c863a0b1d1a985f887e326a073872aa0b4d4269f5b5086209b675c42755 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-970caef7-c556-4054-b603-3084ef389d78, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 28 18:18:22 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:22.630 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[c062b58f-3618-484f-afd2-2dfddf0cb85e]: (4, ('Fri Nov 28 06:18:22 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-970caef7-c556-4054-b603-3084ef389d78 (17447c863a0b1d1a985f887e326a073872aa0b4d4269f5b5086209b675c42755)\n17447c863a0b1d1a985f887e326a073872aa0b4d4269f5b5086209b675c42755\nFri Nov 28 06:18:22 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-970caef7-c556-4054-b603-3084ef389d78 (17447c863a0b1d1a985f887e326a073872aa0b4d4269f5b5086209b675c42755)\n17447c863a0b1d1a985f887e326a073872aa0b4d4269f5b5086209b675c42755\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:22 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:22.632 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[af43c8ce-385b-4a77-a807-32cf6a146065]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:22 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:22.634 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap970caef7-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.636 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:22 compute-0 kernel: tap970caef7-c0: left promiscuous mode
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.651 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.654 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:22 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:22.654 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[64727d1b-2650-4833-86d9-f251f93abfc8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:22 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:22.670 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[bb999596-7ec9-457e-834d-9cb25f91f2be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:22 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:22.671 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[6aaeb491-ca1c-4b6b-ac01-99b36b7ca963]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:22 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:22.687 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[fc55932f-d712-4fe6-a843-4bd002d99ed4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503255, 'reachable_time': 16696, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248931, 'error': None, 'target': 'ovnmeta-970caef7-c556-4054-b603-3084ef389d78', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:22 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:22.690 106734 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-970caef7-c556-4054-b603-3084ef389d78 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 28 18:18:22 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:22.690 106734 DEBUG oslo.privsep.daemon [-] privsep: reply[3945a462-8509-4688-815c-ce1dd3e3267c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d970caef7\x2dc556\x2d4054\x2db603\x2d3084ef389d78.mount: Deactivated successfully.
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.843 189300 INFO nova.compute.manager [None req-ac4cc817-00f2-4060-8b9a-0773ac789c55 bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Took 0.62 seconds to destroy the instance on the hypervisor.#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.844 189300 DEBUG oslo.service.loopingcall [None req-ac4cc817-00f2-4060-8b9a-0773ac789c55 bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.845 189300 DEBUG nova.compute.manager [-] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.846 189300 DEBUG nova.network.neutron [-] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.931 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance c0b50299-41b1-48cf-b075-08ca569a1bd5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.932 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 1b9021c0-08c4-448d-9f6c-a589a543fb4c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.933 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:18:22 compute-0 nova_compute[189296]: 2025-11-28 18:18:22.934 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:18:23 compute-0 nova_compute[189296]: 2025-11-28 18:18:23.000 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:18:23 compute-0 nova_compute[189296]: 2025-11-28 18:18:23.017 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:18:23 compute-0 nova_compute[189296]: 2025-11-28 18:18:23.051 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:18:23 compute-0 nova_compute[189296]: 2025-11-28 18:18:23.052 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.454s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:23 compute-0 nova_compute[189296]: 2025-11-28 18:18:23.588 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:23 compute-0 nova_compute[189296]: 2025-11-28 18:18:23.845 189300 DEBUG nova.network.neutron [-] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:18:23 compute-0 nova_compute[189296]: 2025-11-28 18:18:23.865 189300 INFO nova.compute.manager [-] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Took 1.02 seconds to deallocate network for instance.#033[00m
Nov 28 18:18:23 compute-0 nova_compute[189296]: 2025-11-28 18:18:23.909 189300 DEBUG oslo_concurrency.lockutils [None req-ac4cc817-00f2-4060-8b9a-0773ac789c55 bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:23 compute-0 nova_compute[189296]: 2025-11-28 18:18:23.910 189300 DEBUG oslo_concurrency.lockutils [None req-ac4cc817-00f2-4060-8b9a-0773ac789c55 bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:23 compute-0 nova_compute[189296]: 2025-11-28 18:18:23.921 189300 DEBUG nova.compute.manager [req-12e1c1be-d9f1-4dc7-9ef2-da4930463f03 req-777fd4fe-cf9e-48ac-acbe-a7948b170642 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Received event network-vif-deleted-6c1cb38b-9fde-458f-a36b-d1c95b04690c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:18:24 compute-0 nova_compute[189296]: 2025-11-28 18:18:24.004 189300 DEBUG nova.compute.provider_tree [None req-ac4cc817-00f2-4060-8b9a-0773ac789c55 bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:18:24 compute-0 nova_compute[189296]: 2025-11-28 18:18:24.022 189300 DEBUG nova.scheduler.client.report [None req-ac4cc817-00f2-4060-8b9a-0773ac789c55 bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:18:24 compute-0 nova_compute[189296]: 2025-11-28 18:18:24.050 189300 DEBUG oslo_concurrency.lockutils [None req-ac4cc817-00f2-4060-8b9a-0773ac789c55 bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:24 compute-0 nova_compute[189296]: 2025-11-28 18:18:24.079 189300 INFO nova.scheduler.client.report [None req-ac4cc817-00f2-4060-8b9a-0773ac789c55 bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Deleted allocations for instance c0b50299-41b1-48cf-b075-08ca569a1bd5#033[00m
Nov 28 18:18:24 compute-0 nova_compute[189296]: 2025-11-28 18:18:24.161 189300 DEBUG oslo_concurrency.lockutils [None req-ac4cc817-00f2-4060-8b9a-0773ac789c55 bbe93898827d4d57a49114a72388c0ab 4e27f3ae6d694d7ca975b778b997e12f - - default default] Lock "c0b50299-41b1-48cf-b075-08ca569a1bd5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:25 compute-0 nova_compute[189296]: 2025-11-28 18:18:25.052 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:18:25 compute-0 nova_compute[189296]: 2025-11-28 18:18:25.183 189300 DEBUG nova.network.neutron [req-c0a75f53-54e6-4bf5-99ab-36c0a76299dd req-8dd8d690-66e1-41e6-a94a-ac008ce75de2 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Updated VIF entry in instance network info cache for port 6c1cb38b-9fde-458f-a36b-d1c95b04690c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 28 18:18:25 compute-0 nova_compute[189296]: 2025-11-28 18:18:25.184 189300 DEBUG nova.network.neutron [req-c0a75f53-54e6-4bf5-99ab-36c0a76299dd req-8dd8d690-66e1-41e6-a94a-ac008ce75de2 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Updating instance_info_cache with network_info: [{"id": "6c1cb38b-9fde-458f-a36b-d1c95b04690c", "address": "fa:16:3e:1c:34:7e", "network": {"id": "970caef7-c556-4054-b603-3084ef389d78", "bridge": "br-int", "label": "tempest-ServersTestJSON-467423109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e27f3ae6d694d7ca975b778b997e12f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c1cb38b-9f", "ovs_interfaceid": "6c1cb38b-9fde-458f-a36b-d1c95b04690c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:18:25 compute-0 nova_compute[189296]: 2025-11-28 18:18:25.208 189300 DEBUG oslo_concurrency.lockutils [req-c0a75f53-54e6-4bf5-99ab-36c0a76299dd req-8dd8d690-66e1-41e6-a94a-ac008ce75de2 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-c0b50299-41b1-48cf-b075-08ca569a1bd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:18:25 compute-0 nova_compute[189296]: 2025-11-28 18:18:25.208 189300 DEBUG nova.compute.manager [req-c0a75f53-54e6-4bf5-99ab-36c0a76299dd req-8dd8d690-66e1-41e6-a94a-ac008ce75de2 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Received event network-changed-c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:18:25 compute-0 nova_compute[189296]: 2025-11-28 18:18:25.209 189300 DEBUG nova.compute.manager [req-c0a75f53-54e6-4bf5-99ab-36c0a76299dd req-8dd8d690-66e1-41e6-a94a-ac008ce75de2 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Refreshing instance network info cache due to event network-changed-c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 28 18:18:25 compute-0 nova_compute[189296]: 2025-11-28 18:18:25.209 189300 DEBUG oslo_concurrency.lockutils [req-c0a75f53-54e6-4bf5-99ab-36c0a76299dd req-8dd8d690-66e1-41e6-a94a-ac008ce75de2 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-1b9021c0-08c4-448d-9f6c-a589a543fb4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:18:25 compute-0 nova_compute[189296]: 2025-11-28 18:18:25.210 189300 DEBUG oslo_concurrency.lockutils [req-c0a75f53-54e6-4bf5-99ab-36c0a76299dd req-8dd8d690-66e1-41e6-a94a-ac008ce75de2 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-1b9021c0-08c4-448d-9f6c-a589a543fb4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:18:25 compute-0 nova_compute[189296]: 2025-11-28 18:18:25.210 189300 DEBUG nova.network.neutron [req-c0a75f53-54e6-4bf5-99ab-36c0a76299dd req-8dd8d690-66e1-41e6-a94a-ac008ce75de2 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Refreshing network info cache for port c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 18:18:25 compute-0 nova_compute[189296]: 2025-11-28 18:18:25.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:18:25 compute-0 nova_compute[189296]: 2025-11-28 18:18:25.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:18:25 compute-0 nova_compute[189296]: 2025-11-28 18:18:25.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 28 18:18:25 compute-0 nova_compute[189296]: 2025-11-28 18:18:25.644 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 28 18:18:27 compute-0 nova_compute[189296]: 2025-11-28 18:18:27.057 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:27 compute-0 nova_compute[189296]: 2025-11-28 18:18:27.546 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:27 compute-0 nova_compute[189296]: 2025-11-28 18:18:27.640 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:18:28 compute-0 nova_compute[189296]: 2025-11-28 18:18:28.591 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:28 compute-0 nova_compute[189296]: 2025-11-28 18:18:28.699 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:28 compute-0 nova_compute[189296]: 2025-11-28 18:18:28.887 189300 DEBUG nova.network.neutron [req-c0a75f53-54e6-4bf5-99ab-36c0a76299dd req-8dd8d690-66e1-41e6-a94a-ac008ce75de2 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Updated VIF entry in instance network info cache for port c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 28 18:18:28 compute-0 nova_compute[189296]: 2025-11-28 18:18:28.888 189300 DEBUG nova.network.neutron [req-c0a75f53-54e6-4bf5-99ab-36c0a76299dd req-8dd8d690-66e1-41e6-a94a-ac008ce75de2 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Updating instance_info_cache with network_info: [{"id": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "address": "fa:16:3e:3f:70:8b", "network": {"id": "c1532d46-30e4-42ec-9ba7-4dc79dd935a5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1705465512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05214746198d48dea7b8b3617f29cb40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a2ec90-a4", "ovs_interfaceid": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:18:28 compute-0 nova_compute[189296]: 2025-11-28 18:18:28.909 189300 DEBUG oslo_concurrency.lockutils [req-c0a75f53-54e6-4bf5-99ab-36c0a76299dd req-8dd8d690-66e1-41e6-a94a-ac008ce75de2 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-1b9021c0-08c4-448d-9f6c-a589a543fb4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:18:29 compute-0 ovn_controller[97771]: 2025-11-28T18:18:29Z|00102|binding|INFO|Releasing lport c8eddf3b-1e0b-416b-ad1a-748f52f665f0 from this chassis (sb_readonly=0)
Nov 28 18:18:29 compute-0 nova_compute[189296]: 2025-11-28 18:18:29.485 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:29 compute-0 podman[203494]: time="2025-11-28T18:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:18:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:18:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4785 "" "Go-http-client/1.1"
Nov 28 18:18:31 compute-0 podman[248933]: 2025-11-28 18:18:31.03210927 +0000 UTC m=+0.092681424 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:18:31 compute-0 openstack_network_exporter[205632]: ERROR   18:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:18:31 compute-0 openstack_network_exporter[205632]: ERROR   18:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:18:31 compute-0 openstack_network_exporter[205632]: ERROR   18:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:18:31 compute-0 openstack_network_exporter[205632]: ERROR   18:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:18:31 compute-0 openstack_network_exporter[205632]: ERROR   18:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:18:31 compute-0 nova_compute[189296]: 2025-11-28 18:18:31.932 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:32 compute-0 nova_compute[189296]: 2025-11-28 18:18:32.548 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:33 compute-0 nova_compute[189296]: 2025-11-28 18:18:33.016 189300 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764353898.0164757, 9d9438df-a3bc-4004-95a3-0d76f449fe7e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:18:33 compute-0 nova_compute[189296]: 2025-11-28 18:18:33.017 189300 INFO nova.compute.manager [-] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] VM Stopped (Lifecycle Event)#033[00m
Nov 28 18:18:33 compute-0 nova_compute[189296]: 2025-11-28 18:18:33.037 189300 DEBUG nova.compute.manager [None req-fdf426b4-3919-470c-8cf2-4160d5a9f951 - - - - - -] [instance: 9d9438df-a3bc-4004-95a3-0d76f449fe7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:18:33 compute-0 nova_compute[189296]: 2025-11-28 18:18:33.593 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:35 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:35.391 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:8b:d3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:a2:f8:d3:3f:9a'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:18:35 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:35.391 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 28 18:18:35 compute-0 nova_compute[189296]: 2025-11-28 18:18:35.393 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:35 compute-0 nova_compute[189296]: 2025-11-28 18:18:35.510 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:36 compute-0 ovn_controller[97771]: 2025-11-28T18:18:36Z|00103|binding|INFO|Releasing lport c8eddf3b-1e0b-416b-ad1a-748f52f665f0 from this chassis (sb_readonly=0)
Nov 28 18:18:36 compute-0 nova_compute[189296]: 2025-11-28 18:18:36.158 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.042 189300 DEBUG oslo_concurrency.lockutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Acquiring lock "b8886654-0bcc-4b6e-a66e-aa6365e827f3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.043 189300 DEBUG oslo_concurrency.lockutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Lock "b8886654-0bcc-4b6e-a66e-aa6365e827f3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.066 189300 DEBUG nova.compute.manager [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.155 189300 DEBUG oslo_concurrency.lockutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.156 189300 DEBUG oslo_concurrency.lockutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.164 189300 DEBUG nova.virt.hardware [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.165 189300 INFO nova.compute.claims [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.274 189300 DEBUG nova.compute.provider_tree [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.290 189300 DEBUG nova.scheduler.client.report [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.312 189300 DEBUG oslo_concurrency.lockutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.313 189300 DEBUG nova.compute.manager [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.366 189300 DEBUG nova.compute.manager [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.367 189300 DEBUG nova.network.neutron [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.385 189300 INFO nova.virt.libvirt.driver [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 28 18:18:37 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:37.393 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d60b742f-7e94-4137-b50a-cfc8eac54167, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.403 189300 DEBUG nova.compute.manager [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.503 189300 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764353902.5017674, c0b50299-41b1-48cf-b075-08ca569a1bd5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.504 189300 INFO nova.compute.manager [-] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] VM Stopped (Lifecycle Event)#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.551 189300 DEBUG nova.compute.manager [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.553 189300 DEBUG nova.virt.libvirt.driver [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.554 189300 INFO nova.virt.libvirt.driver [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Creating image(s)#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.554 189300 DEBUG oslo_concurrency.lockutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Acquiring lock "/var/lib/nova/instances/b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.555 189300 DEBUG oslo_concurrency.lockutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Lock "/var/lib/nova/instances/b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.555 189300 DEBUG oslo_concurrency.lockutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Lock "/var/lib/nova/instances/b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.572 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.574 189300 DEBUG nova.compute.manager [None req-9d5578a1-3208-4836-aba4-19ae20ff4bdf - - - - - -] [instance: c0b50299-41b1-48cf-b075-08ca569a1bd5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.575 189300 DEBUG oslo_concurrency.processutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.646 189300 DEBUG nova.policy [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd4a66bec161e46a6ba097408338141a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9848e024a7d14a6c9665c58283238c37', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.655 189300 DEBUG oslo_concurrency.processutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.656 189300 DEBUG oslo_concurrency.lockutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Acquiring lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.656 189300 DEBUG oslo_concurrency.lockutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.671 189300 DEBUG oslo_concurrency.processutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.726 189300 DEBUG oslo_concurrency.processutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.727 189300 DEBUG oslo_concurrency.processutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c,backing_fmt=raw /var/lib/nova/instances/b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.776 189300 DEBUG oslo_concurrency.processutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c,backing_fmt=raw /var/lib/nova/instances/b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk 1073741824" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.778 189300 DEBUG oslo_concurrency.lockutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.780 189300 DEBUG oslo_concurrency.processutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.842 189300 DEBUG oslo_concurrency.processutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.844 189300 DEBUG nova.virt.disk.api [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Checking if we can resize image /var/lib/nova/instances/b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.845 189300 DEBUG oslo_concurrency.processutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.906 189300 DEBUG oslo_concurrency.processutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.907 189300 DEBUG nova.virt.disk.api [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Cannot resize image /var/lib/nova/instances/b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.908 189300 DEBUG nova.objects.instance [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Lazy-loading 'migration_context' on Instance uuid b8886654-0bcc-4b6e-a66e-aa6365e827f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.910 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.927 189300 DEBUG nova.virt.libvirt.driver [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.928 189300 DEBUG nova.virt.libvirt.driver [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Ensure instance console log exists: /var/lib/nova/instances/b8886654-0bcc-4b6e-a66e-aa6365e827f3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.929 189300 DEBUG oslo_concurrency.lockutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.929 189300 DEBUG oslo_concurrency.lockutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:37 compute-0 nova_compute[189296]: 2025-11-28 18:18:37.930 189300 DEBUG oslo_concurrency.lockutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:38 compute-0 nova_compute[189296]: 2025-11-28 18:18:38.595 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:38 compute-0 nova_compute[189296]: 2025-11-28 18:18:38.770 189300 DEBUG nova.network.neutron [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Successfully created port: 083a607a-fb99-42ad-a35d-408d472897cf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 28 18:18:41 compute-0 nova_compute[189296]: 2025-11-28 18:18:41.610 189300 DEBUG nova.network.neutron [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Successfully updated port: 083a607a-fb99-42ad-a35d-408d472897cf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 28 18:18:41 compute-0 nova_compute[189296]: 2025-11-28 18:18:41.651 189300 DEBUG oslo_concurrency.lockutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Acquiring lock "refresh_cache-b8886654-0bcc-4b6e-a66e-aa6365e827f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:18:41 compute-0 nova_compute[189296]: 2025-11-28 18:18:41.651 189300 DEBUG oslo_concurrency.lockutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Acquired lock "refresh_cache-b8886654-0bcc-4b6e-a66e-aa6365e827f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:18:41 compute-0 nova_compute[189296]: 2025-11-28 18:18:41.652 189300 DEBUG nova.network.neutron [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 28 18:18:41 compute-0 nova_compute[189296]: 2025-11-28 18:18:41.983 189300 DEBUG nova.compute.manager [req-d363a097-96ed-45ca-9d9f-95b4c4c053b2 req-2305a6da-7cd2-4a50-ba8c-ee2f1044ba0e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Received event network-changed-083a607a-fb99-42ad-a35d-408d472897cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:18:41 compute-0 nova_compute[189296]: 2025-11-28 18:18:41.984 189300 DEBUG nova.compute.manager [req-d363a097-96ed-45ca-9d9f-95b4c4c053b2 req-2305a6da-7cd2-4a50-ba8c-ee2f1044ba0e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Refreshing instance network info cache due to event network-changed-083a607a-fb99-42ad-a35d-408d472897cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 28 18:18:41 compute-0 nova_compute[189296]: 2025-11-28 18:18:41.985 189300 DEBUG oslo_concurrency.lockutils [req-d363a097-96ed-45ca-9d9f-95b4c4c053b2 req-2305a6da-7cd2-4a50-ba8c-ee2f1044ba0e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-b8886654-0bcc-4b6e-a66e-aa6365e827f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:18:42 compute-0 nova_compute[189296]: 2025-11-28 18:18:42.162 189300 DEBUG nova.network.neutron [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 28 18:18:42 compute-0 nova_compute[189296]: 2025-11-28 18:18:42.542 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:42 compute-0 ovn_controller[97771]: 2025-11-28T18:18:42Z|00104|binding|INFO|Releasing lport c8eddf3b-1e0b-416b-ad1a-748f52f665f0 from this chassis (sb_readonly=0)
Nov 28 18:18:42 compute-0 nova_compute[189296]: 2025-11-28 18:18:42.574 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:42 compute-0 nova_compute[189296]: 2025-11-28 18:18:42.639 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:43 compute-0 podman[248974]: 2025-11-28 18:18:43.067461574 +0000 UTC m=+0.104223576 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 28 18:18:43 compute-0 podman[248973]: 2025-11-28 18:18:43.069815202 +0000 UTC m=+0.109214319 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=f26160204c78771e78cdd2489258319b, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 28 18:18:43 compute-0 podman[248972]: 2025-11-28 18:18:43.077997642 +0000 UTC m=+0.124303488 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, distribution-scope=public)
Nov 28 18:18:43 compute-0 nova_compute[189296]: 2025-11-28 18:18:43.599 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.701 189300 DEBUG nova.network.neutron [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Updating instance_info_cache with network_info: [{"id": "083a607a-fb99-42ad-a35d-408d472897cf", "address": "fa:16:3e:d8:e4:d2", "network": {"id": "767cff4d-c983-406c-a89f-ce8a60b36587", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-310277457-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9848e024a7d14a6c9665c58283238c37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap083a607a-fb", "ovs_interfaceid": "083a607a-fb99-42ad-a35d-408d472897cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.737 189300 DEBUG oslo_concurrency.lockutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Releasing lock "refresh_cache-b8886654-0bcc-4b6e-a66e-aa6365e827f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.738 189300 DEBUG nova.compute.manager [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Instance network_info: |[{"id": "083a607a-fb99-42ad-a35d-408d472897cf", "address": "fa:16:3e:d8:e4:d2", "network": {"id": "767cff4d-c983-406c-a89f-ce8a60b36587", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-310277457-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9848e024a7d14a6c9665c58283238c37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap083a607a-fb", "ovs_interfaceid": "083a607a-fb99-42ad-a35d-408d472897cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.738 189300 DEBUG oslo_concurrency.lockutils [req-d363a097-96ed-45ca-9d9f-95b4c4c053b2 req-2305a6da-7cd2-4a50-ba8c-ee2f1044ba0e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-b8886654-0bcc-4b6e-a66e-aa6365e827f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.739 189300 DEBUG nova.network.neutron [req-d363a097-96ed-45ca-9d9f-95b4c4c053b2 req-2305a6da-7cd2-4a50-ba8c-ee2f1044ba0e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Refreshing network info cache for port 083a607a-fb99-42ad-a35d-408d472897cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.742 189300 DEBUG nova.virt.libvirt.driver [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Start _get_guest_xml network_info=[{"id": "083a607a-fb99-42ad-a35d-408d472897cf", "address": "fa:16:3e:d8:e4:d2", "network": {"id": "767cff4d-c983-406c-a89f-ce8a60b36587", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-310277457-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9848e024a7d14a6c9665c58283238c37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap083a607a-fb", "ovs_interfaceid": "083a607a-fb99-42ad-a35d-408d472897cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-28T18:16:38Z,direct_url=<?>,disk_format='qcow2',id=ffec9e61-65fb-46ae-8d34-338639229ec3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-28T18:16:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'image_id': 'ffec9e61-65fb-46ae-8d34-338639229ec3'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.749 189300 WARNING nova.virt.libvirt.driver [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.755 189300 DEBUG nova.virt.libvirt.host [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.756 189300 DEBUG nova.virt.libvirt.host [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.764 189300 DEBUG nova.virt.libvirt.host [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.765 189300 DEBUG nova.virt.libvirt.host [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.766 189300 DEBUG nova.virt.libvirt.driver [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.766 189300 DEBUG nova.virt.hardware [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-28T18:16:37Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b177f611-8f79-4bfd-9a12-e83e9545757b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-28T18:16:38Z,direct_url=<?>,disk_format='qcow2',id=ffec9e61-65fb-46ae-8d34-338639229ec3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-28T18:16:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.767 189300 DEBUG nova.virt.hardware [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.767 189300 DEBUG nova.virt.hardware [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.768 189300 DEBUG nova.virt.hardware [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.768 189300 DEBUG nova.virt.hardware [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.769 189300 DEBUG nova.virt.hardware [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.769 189300 DEBUG nova.virt.hardware [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.769 189300 DEBUG nova.virt.hardware [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.770 189300 DEBUG nova.virt.hardware [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.770 189300 DEBUG nova.virt.hardware [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.771 189300 DEBUG nova.virt.hardware [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.775 189300 DEBUG nova.virt.libvirt.vif [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T18:18:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-600273819',display_name='tempest-ServerAddressesTestJSON-server-600273819',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-600273819',id=10,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9848e024a7d14a6c9665c58283238c37',ramdisk_id='',reservation_id='r-b24snvz2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-122096787',owner_user_name='tempest-ServerAddressesTestJSON-122096787-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:18:37Z,user_data=None,user_id='d4a66bec161e46a6ba097408338141a1',uuid=b8886654-0bcc-4b6e-a66e-aa6365e827f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "083a607a-fb99-42ad-a35d-408d472897cf", "address": "fa:16:3e:d8:e4:d2", "network": {"id": "767cff4d-c983-406c-a89f-ce8a60b36587", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-310277457-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9848e024a7d14a6c9665c58283238c37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap083a607a-fb", "ovs_interfaceid": "083a607a-fb99-42ad-a35d-408d472897cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.776 189300 DEBUG nova.network.os_vif_util [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Converting VIF {"id": "083a607a-fb99-42ad-a35d-408d472897cf", "address": "fa:16:3e:d8:e4:d2", "network": {"id": "767cff4d-c983-406c-a89f-ce8a60b36587", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-310277457-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9848e024a7d14a6c9665c58283238c37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap083a607a-fb", "ovs_interfaceid": "083a607a-fb99-42ad-a35d-408d472897cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.777 189300 DEBUG nova.network.os_vif_util [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:e4:d2,bridge_name='br-int',has_traffic_filtering=True,id=083a607a-fb99-42ad-a35d-408d472897cf,network=Network(767cff4d-c983-406c-a89f-ce8a60b36587),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap083a607a-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.779 189300 DEBUG nova.objects.instance [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Lazy-loading 'pci_devices' on Instance uuid b8886654-0bcc-4b6e-a66e-aa6365e827f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.816 189300 DEBUG nova.virt.libvirt.driver [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] End _get_guest_xml xml=<domain type="kvm">
Nov 28 18:18:44 compute-0 nova_compute[189296]:  <uuid>b8886654-0bcc-4b6e-a66e-aa6365e827f3</uuid>
Nov 28 18:18:44 compute-0 nova_compute[189296]:  <name>instance-0000000a</name>
Nov 28 18:18:44 compute-0 nova_compute[189296]:  <memory>131072</memory>
Nov 28 18:18:44 compute-0 nova_compute[189296]:  <vcpu>1</vcpu>
Nov 28 18:18:44 compute-0 nova_compute[189296]:  <metadata>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <nova:name>tempest-ServerAddressesTestJSON-server-600273819</nova:name>
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <nova:creationTime>2025-11-28 18:18:44</nova:creationTime>
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <nova:flavor name="m1.nano">
Nov 28 18:18:44 compute-0 nova_compute[189296]:        <nova:memory>128</nova:memory>
Nov 28 18:18:44 compute-0 nova_compute[189296]:        <nova:disk>1</nova:disk>
Nov 28 18:18:44 compute-0 nova_compute[189296]:        <nova:swap>0</nova:swap>
Nov 28 18:18:44 compute-0 nova_compute[189296]:        <nova:ephemeral>0</nova:ephemeral>
Nov 28 18:18:44 compute-0 nova_compute[189296]:        <nova:vcpus>1</nova:vcpus>
Nov 28 18:18:44 compute-0 nova_compute[189296]:      </nova:flavor>
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <nova:owner>
Nov 28 18:18:44 compute-0 nova_compute[189296]:        <nova:user uuid="d4a66bec161e46a6ba097408338141a1">tempest-ServerAddressesTestJSON-122096787-project-member</nova:user>
Nov 28 18:18:44 compute-0 nova_compute[189296]:        <nova:project uuid="9848e024a7d14a6c9665c58283238c37">tempest-ServerAddressesTestJSON-122096787</nova:project>
Nov 28 18:18:44 compute-0 nova_compute[189296]:      </nova:owner>
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <nova:root type="image" uuid="ffec9e61-65fb-46ae-8d34-338639229ec3"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <nova:ports>
Nov 28 18:18:44 compute-0 nova_compute[189296]:        <nova:port uuid="083a607a-fb99-42ad-a35d-408d472897cf">
Nov 28 18:18:44 compute-0 nova_compute[189296]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:        </nova:port>
Nov 28 18:18:44 compute-0 nova_compute[189296]:      </nova:ports>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    </nova:instance>
Nov 28 18:18:44 compute-0 nova_compute[189296]:  </metadata>
Nov 28 18:18:44 compute-0 nova_compute[189296]:  <sysinfo type="smbios">
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <system>
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <entry name="manufacturer">RDO</entry>
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <entry name="product">OpenStack Compute</entry>
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <entry name="serial">b8886654-0bcc-4b6e-a66e-aa6365e827f3</entry>
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <entry name="uuid">b8886654-0bcc-4b6e-a66e-aa6365e827f3</entry>
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <entry name="family">Virtual Machine</entry>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    </system>
Nov 28 18:18:44 compute-0 nova_compute[189296]:  </sysinfo>
Nov 28 18:18:44 compute-0 nova_compute[189296]:  <os>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <boot dev="hd"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <smbios mode="sysinfo"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:  </os>
Nov 28 18:18:44 compute-0 nova_compute[189296]:  <features>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <acpi/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <apic/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <vmcoreinfo/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:  </features>
Nov 28 18:18:44 compute-0 nova_compute[189296]:  <clock offset="utc">
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <timer name="pit" tickpolicy="delay"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <timer name="hpet" present="no"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:  </clock>
Nov 28 18:18:44 compute-0 nova_compute[189296]:  <cpu mode="host-model" match="exact">
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <topology sockets="1" cores="1" threads="1"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:  </cpu>
Nov 28 18:18:44 compute-0 nova_compute[189296]:  <devices>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <disk type="file" device="disk">
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <target dev="vda" bus="virtio"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <disk type="file" device="cdrom">
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <driver name="qemu" type="raw" cache="none"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk.config"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <target dev="sda" bus="sata"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <interface type="ethernet">
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <mac address="fa:16:3e:d8:e4:d2"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <driver name="vhost" rx_queue_size="512"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <mtu size="1442"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <target dev="tap083a607a-fb"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    </interface>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <serial type="pty">
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <log file="/var/lib/nova/instances/b8886654-0bcc-4b6e-a66e-aa6365e827f3/console.log" append="off"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    </serial>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <video>
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    </video>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <input type="tablet" bus="usb"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <rng model="virtio">
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <backend model="random">/dev/urandom</backend>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    </rng>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <controller type="usb" index="0"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    <memballoon model="virtio">
Nov 28 18:18:44 compute-0 nova_compute[189296]:      <stats period="10"/>
Nov 28 18:18:44 compute-0 nova_compute[189296]:    </memballoon>
Nov 28 18:18:44 compute-0 nova_compute[189296]:  </devices>
Nov 28 18:18:44 compute-0 nova_compute[189296]: </domain>
Nov 28 18:18:44 compute-0 nova_compute[189296]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.817 189300 DEBUG nova.compute.manager [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Preparing to wait for external event network-vif-plugged-083a607a-fb99-42ad-a35d-408d472897cf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.818 189300 DEBUG oslo_concurrency.lockutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Acquiring lock "b8886654-0bcc-4b6e-a66e-aa6365e827f3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.818 189300 DEBUG oslo_concurrency.lockutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Lock "b8886654-0bcc-4b6e-a66e-aa6365e827f3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.819 189300 DEBUG oslo_concurrency.lockutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Lock "b8886654-0bcc-4b6e-a66e-aa6365e827f3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.820 189300 DEBUG nova.virt.libvirt.vif [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T18:18:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-600273819',display_name='tempest-ServerAddressesTestJSON-server-600273819',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-600273819',id=10,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9848e024a7d14a6c9665c58283238c37',ramdisk_id='',reservation_id='r-b24snvz2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-122096787',owner_user_name='tempest-ServerAd
dressesTestJSON-122096787-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:18:37Z,user_data=None,user_id='d4a66bec161e46a6ba097408338141a1',uuid=b8886654-0bcc-4b6e-a66e-aa6365e827f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "083a607a-fb99-42ad-a35d-408d472897cf", "address": "fa:16:3e:d8:e4:d2", "network": {"id": "767cff4d-c983-406c-a89f-ce8a60b36587", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-310277457-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9848e024a7d14a6c9665c58283238c37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap083a607a-fb", "ovs_interfaceid": "083a607a-fb99-42ad-a35d-408d472897cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.820 189300 DEBUG nova.network.os_vif_util [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Converting VIF {"id": "083a607a-fb99-42ad-a35d-408d472897cf", "address": "fa:16:3e:d8:e4:d2", "network": {"id": "767cff4d-c983-406c-a89f-ce8a60b36587", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-310277457-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9848e024a7d14a6c9665c58283238c37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap083a607a-fb", "ovs_interfaceid": "083a607a-fb99-42ad-a35d-408d472897cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.821 189300 DEBUG nova.network.os_vif_util [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:e4:d2,bridge_name='br-int',has_traffic_filtering=True,id=083a607a-fb99-42ad-a35d-408d472897cf,network=Network(767cff4d-c983-406c-a89f-ce8a60b36587),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap083a607a-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.822 189300 DEBUG os_vif [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:e4:d2,bridge_name='br-int',has_traffic_filtering=True,id=083a607a-fb99-42ad-a35d-408d472897cf,network=Network(767cff4d-c983-406c-a89f-ce8a60b36587),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap083a607a-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.822 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.823 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.824 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.827 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.828 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap083a607a-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.828 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap083a607a-fb, col_values=(('external_ids', {'iface-id': '083a607a-fb99-42ad-a35d-408d472897cf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d8:e4:d2', 'vm-uuid': 'b8886654-0bcc-4b6e-a66e-aa6365e827f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.831 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:44 compute-0 NetworkManager[56307]: <info>  [1764353924.8342] manager: (tap083a607a-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.838 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.844 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.845 189300 INFO os_vif [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:e4:d2,bridge_name='br-int',has_traffic_filtering=True,id=083a607a-fb99-42ad-a35d-408d472897cf,network=Network(767cff4d-c983-406c-a89f-ce8a60b36587),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap083a607a-fb')#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.984 189300 DEBUG nova.virt.libvirt.driver [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.985 189300 DEBUG nova.virt.libvirt.driver [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.986 189300 DEBUG nova.virt.libvirt.driver [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] No VIF found with MAC fa:16:3e:d8:e4:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 28 18:18:44 compute-0 nova_compute[189296]: 2025-11-28 18:18:44.988 189300 INFO nova.virt.libvirt.driver [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Using config drive#033[00m
Nov 28 18:18:45 compute-0 ovn_controller[97771]: 2025-11-28T18:18:45Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3f:70:8b 10.100.0.4
Nov 28 18:18:45 compute-0 ovn_controller[97771]: 2025-11-28T18:18:45Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3f:70:8b 10.100.0.4
Nov 28 18:18:48 compute-0 podman[249050]: 2025-11-28 18:18:48.02553725 +0000 UTC m=+0.083577469 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 28 18:18:48 compute-0 nova_compute[189296]: 2025-11-28 18:18:48.041 189300 INFO nova.virt.libvirt.driver [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Creating config drive at /var/lib/nova/instances/b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk.config#033[00m
Nov 28 18:18:48 compute-0 nova_compute[189296]: 2025-11-28 18:18:48.046 189300 DEBUG oslo_concurrency.processutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph6f3r3a9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:18:48 compute-0 podman[249049]: 2025-11-28 18:18:48.060015425 +0000 UTC m=+0.110082431 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 28 18:18:48 compute-0 nova_compute[189296]: 2025-11-28 18:18:48.176 189300 DEBUG oslo_concurrency.processutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph6f3r3a9" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:18:48 compute-0 NetworkManager[56307]: <info>  [1764353928.2630] manager: (tap083a607a-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Nov 28 18:18:48 compute-0 kernel: tap083a607a-fb: entered promiscuous mode
Nov 28 18:18:48 compute-0 ovn_controller[97771]: 2025-11-28T18:18:48Z|00105|binding|INFO|Claiming lport 083a607a-fb99-42ad-a35d-408d472897cf for this chassis.
Nov 28 18:18:48 compute-0 ovn_controller[97771]: 2025-11-28T18:18:48Z|00106|binding|INFO|083a607a-fb99-42ad-a35d-408d472897cf: Claiming fa:16:3e:d8:e4:d2 10.100.0.8
Nov 28 18:18:48 compute-0 nova_compute[189296]: 2025-11-28 18:18:48.265 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.283 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:e4:d2 10.100.0.8'], port_security=['fa:16:3e:d8:e4:d2 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b8886654-0bcc-4b6e-a66e-aa6365e827f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-767cff4d-c983-406c-a89f-ce8a60b36587', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9848e024a7d14a6c9665c58283238c37', 'neutron:revision_number': '2', 'neutron:security_group_ids': '944df12e-66df-4054-adad-89252fda4f64', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f08329fe-a5f6-40a4-b5a3-7cf13174dc88, chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=083a607a-fb99-42ad-a35d-408d472897cf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.285 106624 INFO neutron.agent.ovn.metadata.agent [-] Port 083a607a-fb99-42ad-a35d-408d472897cf in datapath 767cff4d-c983-406c-a89f-ce8a60b36587 bound to our chassis#033[00m
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.286 106624 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 767cff4d-c983-406c-a89f-ce8a60b36587#033[00m
Nov 28 18:18:48 compute-0 nova_compute[189296]: 2025-11-28 18:18:48.289 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:48 compute-0 ovn_controller[97771]: 2025-11-28T18:18:48Z|00107|binding|INFO|Setting lport 083a607a-fb99-42ad-a35d-408d472897cf ovn-installed in OVS
Nov 28 18:18:48 compute-0 ovn_controller[97771]: 2025-11-28T18:18:48Z|00108|binding|INFO|Setting lport 083a607a-fb99-42ad-a35d-408d472897cf up in Southbound
Nov 28 18:18:48 compute-0 nova_compute[189296]: 2025-11-28 18:18:48.295 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.300 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[0410094a-b457-4dfe-9a66-8b5e02be382b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.300 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap767cff4d-c1 in ovnmeta-767cff4d-c983-406c-a89f-ce8a60b36587 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.303 238909 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap767cff4d-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.303 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[84b6f374-0f1a-418d-b638-3b1fdb428478]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.304 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[01144571-aa81-4bbe-89ec-8132d4efd7b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.315 106734 DEBUG oslo.privsep.daemon [-] privsep: reply[5229f791-12d7-4958-ac19-6167c49b10c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:48 compute-0 systemd-udevd[249123]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 18:18:48 compute-0 systemd-machined[155703]: New machine qemu-10-instance-0000000a.
Nov 28 18:18:48 compute-0 NetworkManager[56307]: <info>  [1764353928.3314] device (tap083a607a-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 28 18:18:48 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Nov 28 18:18:48 compute-0 NetworkManager[56307]: <info>  [1764353928.3323] device (tap083a607a-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.342 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[2c1d1f8e-bdf6-4c76-b8ab-3c9a742be0d0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:48 compute-0 podman[249095]: 2025-11-28 18:18:48.360101762 +0000 UTC m=+0.110323135 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.380 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[a99dc8a4-8ca5-4176-86c2-3f8b4b19f271]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:48 compute-0 systemd-udevd[249136]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 18:18:48 compute-0 NetworkManager[56307]: <info>  [1764353928.3877] manager: (tap767cff4d-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.387 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[3770fde0-5c35-40a0-90a0-cd92770a6d57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:48 compute-0 podman[249096]: 2025-11-28 18:18:48.391213215 +0000 UTC m=+0.134678003 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., container_name=kepler, io.openshift.tags=base rhel9, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.4)
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.427 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[366abe48-fbc3-439a-aefe-7bef77df4a92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.431 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c00719-865f-400d-a02d-80b89261a13c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:48 compute-0 NetworkManager[56307]: <info>  [1764353928.4515] device (tap767cff4d-c0): carrier: link connected
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.457 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[c814ae6f-18b4-4e9e-bee5-a6deea5ce09b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.473 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[5da10974-a811-4254-8707-10f54428f231]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap767cff4d-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:f6:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507404, 'reachable_time': 30035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249176, 'error': None, 'target': 'ovnmeta-767cff4d-c983-406c-a89f-ce8a60b36587', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.487 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[11b89973-7a5d-4992-a407-61cd386bb8a2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1d:f638'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 507404, 'tstamp': 507404}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249177, 'error': None, 'target': 'ovnmeta-767cff4d-c983-406c-a89f-ce8a60b36587', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.507 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[bd0252ff-6c44-46f3-9d50-b0a39512300b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap767cff4d-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:f6:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507404, 'reachable_time': 30035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249178, 'error': None, 'target': 'ovnmeta-767cff4d-c983-406c-a89f-ce8a60b36587', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.541 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[328cab85-16e2-4707-aa26-6f089896a740]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:48 compute-0 nova_compute[189296]: 2025-11-28 18:18:48.602 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.608 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[e0113e99-b01f-44ed-85f8-ec67b078e8f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.610 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap767cff4d-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.610 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.610 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap767cff4d-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:48 compute-0 NetworkManager[56307]: <info>  [1764353928.6137] manager: (tap767cff4d-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Nov 28 18:18:48 compute-0 kernel: tap767cff4d-c0: entered promiscuous mode
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.618 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap767cff4d-c0, col_values=(('external_ids', {'iface-id': '3a0fe0b8-6777-41f3-9172-cba88c038dea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:48 compute-0 nova_compute[189296]: 2025-11-28 18:18:48.613 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:48 compute-0 ovn_controller[97771]: 2025-11-28T18:18:48Z|00109|binding|INFO|Releasing lport 3a0fe0b8-6777-41f3-9172-cba88c038dea from this chassis (sb_readonly=0)
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.625 106624 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/767cff4d-c983-406c-a89f-ce8a60b36587.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/767cff4d-c983-406c-a89f-ce8a60b36587.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 28 18:18:48 compute-0 nova_compute[189296]: 2025-11-28 18:18:48.625 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.626 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[5565a1a2-d823-4de4-a4c9-b31824f726d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.627 106624 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: global
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]:    log         /dev/log local0 debug
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]:    log-tag     haproxy-metadata-proxy-767cff4d-c983-406c-a89f-ce8a60b36587
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]:    user        root
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]:    group       root
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]:    maxconn     1024
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]:    pidfile     /var/lib/neutron/external/pids/767cff4d-c983-406c-a89f-ce8a60b36587.pid.haproxy
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]:    daemon
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: defaults
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]:    log global
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]:    mode http
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]:    option httplog
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]:    option dontlognull
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]:    option http-server-close
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]:    option forwardfor
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]:    retries                 3
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]:    timeout http-request    30s
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]:    timeout connect         30s
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]:    timeout client          32s
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]:    timeout server          32s
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]:    timeout http-keep-alive 30s
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: listen listener
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]:    bind 169.254.169.254:80
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]:    server metadata /var/lib/neutron/metadata_proxy
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]:    http-request add-header X-OVN-Network-ID 767cff4d-c983-406c-a89f-ce8a60b36587
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 28 18:18:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:48.627 106624 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-767cff4d-c983-406c-a89f-ce8a60b36587', 'env', 'PROCESS_TAG=haproxy-767cff4d-c983-406c-a89f-ce8a60b36587', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/767cff4d-c983-406c-a89f-ce8a60b36587.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 28 18:18:48 compute-0 nova_compute[189296]: 2025-11-28 18:18:48.640 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:48 compute-0 nova_compute[189296]: 2025-11-28 18:18:48.678 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764353928.6775315, b8886654-0bcc-4b6e-a66e-aa6365e827f3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:18:48 compute-0 nova_compute[189296]: 2025-11-28 18:18:48.678 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] VM Started (Lifecycle Event)#033[00m
Nov 28 18:18:48 compute-0 nova_compute[189296]: 2025-11-28 18:18:48.778 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:18:48 compute-0 nova_compute[189296]: 2025-11-28 18:18:48.784 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764353928.6776903, b8886654-0bcc-4b6e-a66e-aa6365e827f3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:18:48 compute-0 nova_compute[189296]: 2025-11-28 18:18:48.784 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] VM Paused (Lifecycle Event)#033[00m
Nov 28 18:18:48 compute-0 nova_compute[189296]: 2025-11-28 18:18:48.854 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:18:48 compute-0 nova_compute[189296]: 2025-11-28 18:18:48.860 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 18:18:48 compute-0 nova_compute[189296]: 2025-11-28 18:18:48.954 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 28 18:18:49 compute-0 podman[249214]: 2025-11-28 18:18:49.050388925 +0000 UTC m=+0.101076949 container create be368dee7980f8cee9929a1637bd14309c014781b538f69fd050a1c3845728ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-767cff4d-c983-406c-a89f-ce8a60b36587, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 28 18:18:49 compute-0 podman[249214]: 2025-11-28 18:18:48.977217181 +0000 UTC m=+0.027905225 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 28 18:18:49 compute-0 systemd[1]: Started libpod-conmon-be368dee7980f8cee9929a1637bd14309c014781b538f69fd050a1c3845728ba.scope.
Nov 28 18:18:49 compute-0 systemd[1]: Started libcrun container.
Nov 28 18:18:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70b6b2b174b83f398487b70453fc4033bf204a545601b9f032738b948a34b662/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 28 18:18:49 compute-0 podman[249214]: 2025-11-28 18:18:49.160956955 +0000 UTC m=+0.211644999 container init be368dee7980f8cee9929a1637bd14309c014781b538f69fd050a1c3845728ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-767cff4d-c983-406c-a89f-ce8a60b36587, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:18:49 compute-0 podman[249214]: 2025-11-28 18:18:49.168817208 +0000 UTC m=+0.219505232 container start be368dee7980f8cee9929a1637bd14309c014781b538f69fd050a1c3845728ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-767cff4d-c983-406c-a89f-ce8a60b36587, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true)
Nov 28 18:18:49 compute-0 neutron-haproxy-ovnmeta-767cff4d-c983-406c-a89f-ce8a60b36587[249230]: [NOTICE]   (249234) : New worker (249236) forked
Nov 28 18:18:49 compute-0 neutron-haproxy-ovnmeta-767cff4d-c983-406c-a89f-ce8a60b36587[249230]: [NOTICE]   (249234) : Loading success.
Nov 28 18:18:49 compute-0 nova_compute[189296]: 2025-11-28 18:18:49.639 189300 DEBUG nova.compute.manager [req-a9b53bc6-4673-4d25-8dcf-91df07427361 req-5fa09ee3-a179-40ff-aec7-06ac1fea3b88 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Received event network-vif-plugged-083a607a-fb99-42ad-a35d-408d472897cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 28 18:18:49 compute-0 nova_compute[189296]: 2025-11-28 18:18:49.640 189300 DEBUG oslo_concurrency.lockutils [req-a9b53bc6-4673-4d25-8dcf-91df07427361 req-5fa09ee3-a179-40ff-aec7-06ac1fea3b88 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "b8886654-0bcc-4b6e-a66e-aa6365e827f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:18:49 compute-0 nova_compute[189296]: 2025-11-28 18:18:49.641 189300 DEBUG oslo_concurrency.lockutils [req-a9b53bc6-4673-4d25-8dcf-91df07427361 req-5fa09ee3-a179-40ff-aec7-06ac1fea3b88 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "b8886654-0bcc-4b6e-a66e-aa6365e827f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:18:49 compute-0 nova_compute[189296]: 2025-11-28 18:18:49.641 189300 DEBUG oslo_concurrency.lockutils [req-a9b53bc6-4673-4d25-8dcf-91df07427361 req-5fa09ee3-a179-40ff-aec7-06ac1fea3b88 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "b8886654-0bcc-4b6e-a66e-aa6365e827f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:18:49 compute-0 nova_compute[189296]: 2025-11-28 18:18:49.642 189300 DEBUG nova.compute.manager [req-a9b53bc6-4673-4d25-8dcf-91df07427361 req-5fa09ee3-a179-40ff-aec7-06ac1fea3b88 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Processing event network-vif-plugged-083a607a-fb99-42ad-a35d-408d472897cf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 28 18:18:49 compute-0 nova_compute[189296]: 2025-11-28 18:18:49.644 189300 DEBUG nova.compute.manager [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 28 18:18:49 compute-0 nova_compute[189296]: 2025-11-28 18:18:49.649 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764353929.6488848, b8886654-0bcc-4b6e-a66e-aa6365e827f3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 28 18:18:49 compute-0 nova_compute[189296]: 2025-11-28 18:18:49.649 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] VM Resumed (Lifecycle Event)
Nov 28 18:18:49 compute-0 nova_compute[189296]: 2025-11-28 18:18:49.653 189300 DEBUG nova.virt.libvirt.driver [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 28 18:18:49 compute-0 nova_compute[189296]: 2025-11-28 18:18:49.659 189300 INFO nova.virt.libvirt.driver [-] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Instance spawned successfully.
Nov 28 18:18:49 compute-0 nova_compute[189296]: 2025-11-28 18:18:49.661 189300 DEBUG nova.virt.libvirt.driver [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 28 18:18:49 compute-0 nova_compute[189296]: 2025-11-28 18:18:49.714 189300 DEBUG nova.virt.libvirt.driver [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 28 18:18:49 compute-0 nova_compute[189296]: 2025-11-28 18:18:49.714 189300 DEBUG nova.virt.libvirt.driver [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 28 18:18:49 compute-0 nova_compute[189296]: 2025-11-28 18:18:49.715 189300 DEBUG nova.virt.libvirt.driver [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 28 18:18:49 compute-0 nova_compute[189296]: 2025-11-28 18:18:49.716 189300 DEBUG nova.virt.libvirt.driver [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 28 18:18:49 compute-0 nova_compute[189296]: 2025-11-28 18:18:49.716 189300 DEBUG nova.virt.libvirt.driver [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 28 18:18:49 compute-0 nova_compute[189296]: 2025-11-28 18:18:49.716 189300 DEBUG nova.virt.libvirt.driver [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 28 18:18:49 compute-0 nova_compute[189296]: 2025-11-28 18:18:49.720 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 28 18:18:49 compute-0 nova_compute[189296]: 2025-11-28 18:18:49.725 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 28 18:18:49 compute-0 nova_compute[189296]: 2025-11-28 18:18:49.812 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 28 18:18:49 compute-0 nova_compute[189296]: 2025-11-28 18:18:49.831 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:18:49 compute-0 nova_compute[189296]: 2025-11-28 18:18:49.876 189300 INFO nova.compute.manager [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Took 12.32 seconds to spawn the instance on the hypervisor.
Nov 28 18:18:49 compute-0 nova_compute[189296]: 2025-11-28 18:18:49.876 189300 DEBUG nova.compute.manager [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 28 18:18:50 compute-0 nova_compute[189296]: 2025-11-28 18:18:50.047 189300 INFO nova.compute.manager [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Took 12.92 seconds to build instance.
Nov 28 18:18:50 compute-0 nova_compute[189296]: 2025-11-28 18:18:50.114 189300 DEBUG oslo_concurrency.lockutils [None req-45a3b199-bcf8-4d75-b270-41769000e461 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Lock "b8886654-0bcc-4b6e-a66e-aa6365e827f3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:18:51 compute-0 nova_compute[189296]: 2025-11-28 18:18:51.249 189300 DEBUG nova.network.neutron [req-d363a097-96ed-45ca-9d9f-95b4c4c053b2 req-2305a6da-7cd2-4a50-ba8c-ee2f1044ba0e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Updated VIF entry in instance network info cache for port 083a607a-fb99-42ad-a35d-408d472897cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 28 18:18:51 compute-0 nova_compute[189296]: 2025-11-28 18:18:51.249 189300 DEBUG nova.network.neutron [req-d363a097-96ed-45ca-9d9f-95b4c4c053b2 req-2305a6da-7cd2-4a50-ba8c-ee2f1044ba0e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Updating instance_info_cache with network_info: [{"id": "083a607a-fb99-42ad-a35d-408d472897cf", "address": "fa:16:3e:d8:e4:d2", "network": {"id": "767cff4d-c983-406c-a89f-ce8a60b36587", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-310277457-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9848e024a7d14a6c9665c58283238c37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap083a607a-fb", "ovs_interfaceid": "083a607a-fb99-42ad-a35d-408d472897cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 28 18:18:51 compute-0 nova_compute[189296]: 2025-11-28 18:18:51.396 189300 DEBUG oslo_concurrency.lockutils [req-d363a097-96ed-45ca-9d9f-95b4c4c053b2 req-2305a6da-7cd2-4a50-ba8c-ee2f1044ba0e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-b8886654-0bcc-4b6e-a66e-aa6365e827f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 28 18:18:51 compute-0 nova_compute[189296]: 2025-11-28 18:18:51.807 189300 DEBUG nova.compute.manager [req-1fc29ab1-73d2-4976-8ef3-cfd5fa76bcf5 req-ea9f16ba-bc00-494b-a2c3-28dc8dfa7a43 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Received event network-vif-plugged-083a607a-fb99-42ad-a35d-408d472897cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 28 18:18:51 compute-0 nova_compute[189296]: 2025-11-28 18:18:51.808 189300 DEBUG oslo_concurrency.lockutils [req-1fc29ab1-73d2-4976-8ef3-cfd5fa76bcf5 req-ea9f16ba-bc00-494b-a2c3-28dc8dfa7a43 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "b8886654-0bcc-4b6e-a66e-aa6365e827f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:18:51 compute-0 nova_compute[189296]: 2025-11-28 18:18:51.808 189300 DEBUG oslo_concurrency.lockutils [req-1fc29ab1-73d2-4976-8ef3-cfd5fa76bcf5 req-ea9f16ba-bc00-494b-a2c3-28dc8dfa7a43 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "b8886654-0bcc-4b6e-a66e-aa6365e827f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:18:51 compute-0 nova_compute[189296]: 2025-11-28 18:18:51.808 189300 DEBUG oslo_concurrency.lockutils [req-1fc29ab1-73d2-4976-8ef3-cfd5fa76bcf5 req-ea9f16ba-bc00-494b-a2c3-28dc8dfa7a43 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "b8886654-0bcc-4b6e-a66e-aa6365e827f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:18:51 compute-0 nova_compute[189296]: 2025-11-28 18:18:51.809 189300 DEBUG nova.compute.manager [req-1fc29ab1-73d2-4976-8ef3-cfd5fa76bcf5 req-ea9f16ba-bc00-494b-a2c3-28dc8dfa7a43 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] No waiting events found dispatching network-vif-plugged-083a607a-fb99-42ad-a35d-408d472897cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 28 18:18:51 compute-0 nova_compute[189296]: 2025-11-28 18:18:51.809 189300 WARNING nova.compute.manager [req-1fc29ab1-73d2-4976-8ef3-cfd5fa76bcf5 req-ea9f16ba-bc00-494b-a2c3-28dc8dfa7a43 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Received unexpected event network-vif-plugged-083a607a-fb99-42ad-a35d-408d472897cf for instance with vm_state active and task_state None.
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.984 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.985 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.985 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.986 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc143395760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.986 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.989 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.989 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.989 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.990 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.990 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.990 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.991 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.991 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.991 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.991 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f2acc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.992 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 1b9021c0-08c4-448d-9f6c-a589a543fb4c from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 28 18:18:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:51.993 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/1b9021c0-08c4-448d-9f6c-a589a543fb4c -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}1b19fef84fe76c5f8eb41f423a94cfc31b2af00fb7940935967c184dd40fa55a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 28 18:18:52 compute-0 podman[249247]: 2025-11-28 18:18:52.091069729 +0000 UTC m=+0.146496963 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 28 18:18:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:52.631 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:52.631 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:52.632 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:52 compute-0 nova_compute[189296]: 2025-11-28 18:18:52.688 189300 DEBUG oslo_concurrency.lockutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Acquiring lock "0af9c8e6-8030-462a-9dfd-d52f041685f5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:52 compute-0 nova_compute[189296]: 2025-11-28 18:18:52.689 189300 DEBUG oslo_concurrency.lockutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "0af9c8e6-8030-462a-9dfd-d52f041685f5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:52 compute-0 nova_compute[189296]: 2025-11-28 18:18:52.728 189300 DEBUG nova.compute.manager [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 28 18:18:52 compute-0 nova_compute[189296]: 2025-11-28 18:18:52.741 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:52 compute-0 nova_compute[189296]: 2025-11-28 18:18:52.903 189300 DEBUG oslo_concurrency.lockutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:52 compute-0 nova_compute[189296]: 2025-11-28 18:18:52.904 189300 DEBUG oslo_concurrency.lockutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:52 compute-0 nova_compute[189296]: 2025-11-28 18:18:52.913 189300 DEBUG nova.virt.hardware [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 28 18:18:52 compute-0 nova_compute[189296]: 2025-11-28 18:18:52.913 189300 INFO nova.compute.claims [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 28 18:18:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:52.937 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1993 Content-Type: application/json Date: Fri, 28 Nov 2025 18:18:52 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-87f3a0eb-32b9-409d-81e2-8174de294bfa x-openstack-request-id: req-87f3a0eb-32b9-409d-81e2-8174de294bfa _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 28 18:18:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:52.937 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "1b9021c0-08c4-448d-9f6c-a589a543fb4c", "name": "tempest-AttachInterfacesUnderV243Test-server-403870488", "status": "ACTIVE", "tenant_id": "05214746198d48dea7b8b3617f29cb40", "user_id": "f140e7d00b1542d087d5f92a53ef5082", "metadata": {}, "hostId": "4bd51d575c3c9b7bdbe99da37969093c911b27fc680dbac48790f240", "image": {"id": "ffec9e61-65fb-46ae-8d34-338639229ec3", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ffec9e61-65fb-46ae-8d34-338639229ec3"}]}, "flavor": {"id": "b177f611-8f79-4bfd-9a12-e83e9545757b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/b177f611-8f79-4bfd-9a12-e83e9545757b"}]}, "created": "2025-11-28T18:17:52Z", "updated": "2025-11-28T18:18:14Z", "addresses": {"tempest-AttachInterfacesUnderV243Test-1705465512-network": [{"version": 4, "addr": "10.100.0.4", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:3f:70:8b"}, {"version": 4, "addr": "192.168.122.181", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:3f:70:8b"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/1b9021c0-08c4-448d-9f6c-a589a543fb4c"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/1b9021c0-08c4-448d-9f6c-a589a543fb4c"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-20086383", "OS-SRV-USG:launched_at": "2025-11-28T18:18:14.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--584024297"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000009", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 28 18:18:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:52.937 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/1b9021c0-08c4-448d-9f6c-a589a543fb4c used request id req-87f3a0eb-32b9-409d-81e2-8174de294bfa request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 28 18:18:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:52.939 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1b9021c0-08c4-448d-9f6c-a589a543fb4c', 'name': 'tempest-AttachInterfacesUnderV243Test-server-403870488', 'flavor': {'id': 'b177f611-8f79-4bfd-9a12-e83e9545757b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ffec9e61-65fb-46ae-8d34-338639229ec3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '05214746198d48dea7b8b3617f29cb40', 'user_id': 'f140e7d00b1542d087d5f92a53ef5082', 'hostId': '4bd51d575c3c9b7bdbe99da37969093c911b27fc680dbac48790f240', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:18:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:52.943 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance b8886654-0bcc-4b6e-a66e-aa6365e827f3 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 28 18:18:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:52.944 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/b8886654-0bcc-4b6e-a66e-aa6365e827f3 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}1b19fef84fe76c5f8eb41f423a94cfc31b2af00fb7940935967c184dd40fa55a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 28 18:18:53 compute-0 nova_compute[189296]: 2025-11-28 18:18:53.160 189300 DEBUG nova.compute.provider_tree [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:18:53 compute-0 nova_compute[189296]: 2025-11-28 18:18:53.515 189300 DEBUG nova.scheduler.client.report [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:18:53 compute-0 nova_compute[189296]: 2025-11-28 18:18:53.662 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:53 compute-0 nova_compute[189296]: 2025-11-28 18:18:53.665 189300 DEBUG oslo_concurrency.lockutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:53 compute-0 nova_compute[189296]: 2025-11-28 18:18:53.665 189300 DEBUG nova.compute.manager [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 28 18:18:53 compute-0 nova_compute[189296]: 2025-11-28 18:18:53.756 189300 DEBUG nova.compute.manager [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 28 18:18:53 compute-0 nova_compute[189296]: 2025-11-28 18:18:53.757 189300 DEBUG nova.network.neutron [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 28 18:18:53 compute-0 nova_compute[189296]: 2025-11-28 18:18:53.801 189300 INFO nova.virt.libvirt.driver [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 28 18:18:53 compute-0 nova_compute[189296]: 2025-11-28 18:18:53.830 189300 DEBUG nova.compute.manager [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 28 18:18:53 compute-0 nova_compute[189296]: 2025-11-28 18:18:53.924 189300 DEBUG nova.compute.manager [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 28 18:18:53 compute-0 nova_compute[189296]: 2025-11-28 18:18:53.925 189300 DEBUG nova.virt.libvirt.driver [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 28 18:18:53 compute-0 nova_compute[189296]: 2025-11-28 18:18:53.926 189300 INFO nova.virt.libvirt.driver [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Creating image(s)#033[00m
Nov 28 18:18:53 compute-0 nova_compute[189296]: 2025-11-28 18:18:53.926 189300 DEBUG oslo_concurrency.lockutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Acquiring lock "/var/lib/nova/instances/0af9c8e6-8030-462a-9dfd-d52f041685f5/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:53 compute-0 nova_compute[189296]: 2025-11-28 18:18:53.927 189300 DEBUG oslo_concurrency.lockutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "/var/lib/nova/instances/0af9c8e6-8030-462a-9dfd-d52f041685f5/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:53 compute-0 nova_compute[189296]: 2025-11-28 18:18:53.927 189300 DEBUG oslo_concurrency.lockutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "/var/lib/nova/instances/0af9c8e6-8030-462a-9dfd-d52f041685f5/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:53 compute-0 nova_compute[189296]: 2025-11-28 18:18:53.940 189300 DEBUG oslo_concurrency.processutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:18:54 compute-0 nova_compute[189296]: 2025-11-28 18:18:54.003 189300 DEBUG oslo_concurrency.processutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:18:54 compute-0 nova_compute[189296]: 2025-11-28 18:18:54.004 189300 DEBUG oslo_concurrency.lockutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Acquiring lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:54 compute-0 nova_compute[189296]: 2025-11-28 18:18:54.004 189300 DEBUG oslo_concurrency.lockutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:54 compute-0 nova_compute[189296]: 2025-11-28 18:18:54.015 189300 DEBUG oslo_concurrency.processutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:18:54 compute-0 nova_compute[189296]: 2025-11-28 18:18:54.070 189300 DEBUG oslo_concurrency.processutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:18:54 compute-0 nova_compute[189296]: 2025-11-28 18:18:54.071 189300 DEBUG oslo_concurrency.processutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c,backing_fmt=raw /var/lib/nova/instances/0af9c8e6-8030-462a-9dfd-d52f041685f5/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:18:54 compute-0 nova_compute[189296]: 2025-11-28 18:18:54.110 189300 DEBUG oslo_concurrency.processutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c,backing_fmt=raw /var/lib/nova/instances/0af9c8e6-8030-462a-9dfd-d52f041685f5/disk 1073741824" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:18:54 compute-0 nova_compute[189296]: 2025-11-28 18:18:54.111 189300 DEBUG oslo_concurrency.lockutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:54 compute-0 nova_compute[189296]: 2025-11-28 18:18:54.112 189300 DEBUG oslo_concurrency.processutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:18:54 compute-0 nova_compute[189296]: 2025-11-28 18:18:54.165 189300 DEBUG oslo_concurrency.processutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:18:54 compute-0 nova_compute[189296]: 2025-11-28 18:18:54.166 189300 DEBUG nova.virt.disk.api [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Checking if we can resize image /var/lib/nova/instances/0af9c8e6-8030-462a-9dfd-d52f041685f5/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 28 18:18:54 compute-0 nova_compute[189296]: 2025-11-28 18:18:54.166 189300 DEBUG oslo_concurrency.processutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0af9c8e6-8030-462a-9dfd-d52f041685f5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:18:54 compute-0 nova_compute[189296]: 2025-11-28 18:18:54.249 189300 DEBUG oslo_concurrency.processutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0af9c8e6-8030-462a-9dfd-d52f041685f5/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:18:54 compute-0 nova_compute[189296]: 2025-11-28 18:18:54.250 189300 DEBUG nova.virt.disk.api [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Cannot resize image /var/lib/nova/instances/0af9c8e6-8030-462a-9dfd-d52f041685f5/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Nov 28 18:18:54 compute-0 nova_compute[189296]: 2025-11-28 18:18:54.250 189300 DEBUG nova.objects.instance [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lazy-loading 'migration_context' on Instance uuid 0af9c8e6-8030-462a-9dfd-d52f041685f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:18:54 compute-0 nova_compute[189296]: 2025-11-28 18:18:54.264 189300 DEBUG nova.virt.libvirt.driver [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 28 18:18:54 compute-0 nova_compute[189296]: 2025-11-28 18:18:54.265 189300 DEBUG nova.virt.libvirt.driver [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Ensure instance console log exists: /var/lib/nova/instances/0af9c8e6-8030-462a-9dfd-d52f041685f5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 28 18:18:54 compute-0 nova_compute[189296]: 2025-11-28 18:18:54.265 189300 DEBUG oslo_concurrency.lockutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:54 compute-0 nova_compute[189296]: 2025-11-28 18:18:54.265 189300 DEBUG oslo_concurrency.lockutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:54 compute-0 nova_compute[189296]: 2025-11-28 18:18:54.266 189300 DEBUG oslo_concurrency.lockutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.701 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1811 Content-Type: application/json Date: Fri, 28 Nov 2025 18:18:52 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-0185ebf6-293e-4415-9a8e-e066a7760523 x-openstack-request-id: req-0185ebf6-293e-4415-9a8e-e066a7760523 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.701 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "b8886654-0bcc-4b6e-a66e-aa6365e827f3", "name": "tempest-ServerAddressesTestJSON-server-600273819", "status": "ACTIVE", "tenant_id": "9848e024a7d14a6c9665c58283238c37", "user_id": "d4a66bec161e46a6ba097408338141a1", "metadata": {}, "hostId": "dcd847b0ca04572bc55b46c5600975fc2efcc193b59bae6c7f4c0243", "image": {"id": "ffec9e61-65fb-46ae-8d34-338639229ec3", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ffec9e61-65fb-46ae-8d34-338639229ec3"}]}, "flavor": {"id": "b177f611-8f79-4bfd-9a12-e83e9545757b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/b177f611-8f79-4bfd-9a12-e83e9545757b"}]}, "created": "2025-11-28T18:18:35Z", "updated": "2025-11-28T18:18:50Z", "addresses": {"tempest-ServerAddressesTestJSON-310277457-network": [{"version": 4, "addr": "10.100.0.8", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:d8:e4:d2"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/b8886654-0bcc-4b6e-a66e-aa6365e827f3"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/b8886654-0bcc-4b6e-a66e-aa6365e827f3"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-28T18:18:49.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000a", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.701 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/b8886654-0bcc-4b6e-a66e-aa6365e827f3 used request id req-0185ebf6-293e-4415-9a8e-e066a7760523 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.702 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b8886654-0bcc-4b6e-a66e-aa6365e827f3', 'name': 'tempest-ServerAddressesTestJSON-server-600273819', 'flavor': {'id': 'b177f611-8f79-4bfd-9a12-e83e9545757b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ffec9e61-65fb-46ae-8d34-338639229ec3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000a', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '9848e024a7d14a6c9665c58283238c37', 'user_id': 'd4a66bec161e46a6ba097408338141a1', 'hostId': 'dcd847b0ca04572bc55b46c5600975fc2efcc193b59bae6c7f4c0243', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.703 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.703 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.703 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.703 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.704 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-28T18:18:54.703420) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.722 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.722 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.736 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.736 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.737 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.737 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc1433970b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.737 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.737 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.737 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.737 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.738 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-28T18:18:54.737615) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:18:54 compute-0 nova_compute[189296]: 2025-11-28 18:18:54.738 189300 DEBUG nova.policy [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0052e0d91c7e4c98bd11644a4dca818a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c41bbf2b30ca428fbd489c3dc29e8045', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.772 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk.device.read.bytes volume: 31025664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.773 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.812 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.813 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.813 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.813 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc1433971d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.813 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.813 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.813 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.814 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.814 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk.device.read.latency volume: 641344878 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.814 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk.device.read.latency volume: 59768988 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.814 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk.device.read.latency volume: 356019525 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.814 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk.device.read.latency volume: 625616 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.815 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.815 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-28T18:18:54.814007) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc143397c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.815 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.815 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.815 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.815 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.816 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-28T18:18:54.815750) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.819 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 1b9021c0-08c4-448d-9f6c-a589a543fb4c / tapc1a2ec90-a4 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.819 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.828 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for b8886654-0bcc-4b6e-a66e-aa6365e827f3 / tap083a607a-fb inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.829 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.831 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.831 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc143397620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.831 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.832 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.832 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.832 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-28T18:18:54.832231) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.832 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:18:54 compute-0 nova_compute[189296]: 2025-11-28 18:18:54.833 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.857 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/memory.usage volume: 42.9453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.881 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.881 15 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance b8886654-0bcc-4b6e-a66e-aa6365e827f3: ceilometer.compute.pollsters.NoVolumeException
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.881 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.881 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc143397260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.882 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.882 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.882 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.882 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.882 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.882 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-28T18:18:54.882396) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.883 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.884 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.885 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.886 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.887 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc143397290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.887 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.887 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.888 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.888 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.889 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-28T18:18:54.888828) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.889 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk.device.write.bytes volume: 72916992 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.890 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.891 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.891 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.893 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.894 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc143408290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.894 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.894 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.894 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.894 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.894 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.894 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.895 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.895 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc1433972f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.895 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.895 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.895 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-28T18:18:54.894535) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.895 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.895 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.895 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk.device.write.latency volume: 3493472337 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.896 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.896 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.896 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.896 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.896 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc144640f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.897 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.897 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.897 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.897 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-28T18:18:54.895795) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.897 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.897 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk.device.write.requests volume: 271 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.897 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-28T18:18:54.897468) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.897 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.898 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.898 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.899 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.900 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc1433976b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.900 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.901 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.901 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.902 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.903 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-28T18:18:54.902199) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.903 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.904 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.905 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.906 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc143397fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.907 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.907 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.908 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.909 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-28T18:18:54.908472) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.908 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.909 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.910 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.911 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.912 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc14457db80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.912 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.913 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.913 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.913 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.913 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/cpu volume: 30650000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.913 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/cpu volume: 5090000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.913 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.913 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc143397950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.914 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.914 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.914 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.914 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.914 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.914 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-28T18:18:54.913301) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.914 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-403870488>, <NovaLikeServer: tempest-ServerAddressesTestJSON-server-600273819>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-403870488>, <NovaLikeServer: tempest-ServerAddressesTestJSON-server-600273819>]
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.914 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc143397380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.914 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-28T18:18:54.914337) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.914 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.914 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.915 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.915 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.915 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.915 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc143397bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.915 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.915 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.916 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.916 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.916 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.916 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.916 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.916 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc1433973e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.916 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.917 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.917 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.917 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.917 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.917 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc143397c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.917 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.917 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.918 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.918 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.918 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.918 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-28T18:18:54.915156) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.918 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-28T18:18:54.916079) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.919 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-28T18:18:54.917248) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.919 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-28T18:18:54.918247) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.920 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.921 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.923 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc143397ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.924 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.924 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.925 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.926 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.927 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-28T18:18:54.926472) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.927 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/network.outgoing.bytes volume: 3320 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.929 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.931 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.932 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc1460ad370>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.932 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.932 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.932 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.932 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.932 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.933 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.935 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.935 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-28T18:18:54.932496) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.936 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.937 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.937 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc143397d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.937 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.937 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.937 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.937 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.937 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.938 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-28T18:18:54.937767) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.938 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.938 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.938 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc143397e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.938 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.938 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.939 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.939 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.939 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-28T18:18:54.939194) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.939 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.939 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-403870488>, <NovaLikeServer: tempest-ServerAddressesTestJSON-server-600273819>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-403870488>, <NovaLikeServer: tempest-ServerAddressesTestJSON-server-600273819>]
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.939 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc143397650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.940 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.940 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.940 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.940 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.940 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-28T18:18:54.940426) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.940 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/network.incoming.bytes volume: 4269 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.940 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.941 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.941 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc143397e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.941 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.941 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.941 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.942 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.942 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-28T18:18:54.941952) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.942 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/network.outgoing.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.942 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.942 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.943 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc143397f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.943 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.943 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.943 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.943 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.943 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.943 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.944 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.944 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-28T18:18:54.943468) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.944 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc143397230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.944 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.944 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.944 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.945 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.945 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk.device.read.requests volume: 1137 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.945 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-28T18:18:54.944999) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.945 15 DEBUG ceilometer.compute.pollsters [-] 1b9021c0-08c4-448d-9f6c-a589a543fb4c/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.945 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.946 15 DEBUG ceilometer.compute.pollsters [-] b8886654-0bcc-4b6e-a66e-aa6365e827f3/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.946 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.946 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.947 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.947 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.947 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.947 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.947 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.947 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.947 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.947 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.947 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.947 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.947 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.948 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.948 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.948 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.948 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.948 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.948 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.948 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.948 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.948 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.948 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.948 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.948 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.948 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:18:54 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:18:54.949 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.214 189300 DEBUG oslo_concurrency.lockutils [None req-83dc5172-bfc2-4bc3-9fa9-e9f1fb83b4b0 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Acquiring lock "b8886654-0bcc-4b6e-a66e-aa6365e827f3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.215 189300 DEBUG oslo_concurrency.lockutils [None req-83dc5172-bfc2-4bc3-9fa9-e9f1fb83b4b0 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Lock "b8886654-0bcc-4b6e-a66e-aa6365e827f3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.215 189300 DEBUG oslo_concurrency.lockutils [None req-83dc5172-bfc2-4bc3-9fa9-e9f1fb83b4b0 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Acquiring lock "b8886654-0bcc-4b6e-a66e-aa6365e827f3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.215 189300 DEBUG oslo_concurrency.lockutils [None req-83dc5172-bfc2-4bc3-9fa9-e9f1fb83b4b0 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Lock "b8886654-0bcc-4b6e-a66e-aa6365e827f3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.215 189300 DEBUG oslo_concurrency.lockutils [None req-83dc5172-bfc2-4bc3-9fa9-e9f1fb83b4b0 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Lock "b8886654-0bcc-4b6e-a66e-aa6365e827f3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.217 189300 INFO nova.compute.manager [None req-83dc5172-bfc2-4bc3-9fa9-e9f1fb83b4b0 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Terminating instance#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.217 189300 DEBUG nova.compute.manager [None req-83dc5172-bfc2-4bc3-9fa9-e9f1fb83b4b0 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 28 18:18:56 compute-0 kernel: tap083a607a-fb (unregistering): left promiscuous mode
Nov 28 18:18:56 compute-0 NetworkManager[56307]: <info>  [1764353936.2455] device (tap083a607a-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.256 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:56 compute-0 ovn_controller[97771]: 2025-11-28T18:18:56Z|00110|binding|INFO|Releasing lport 083a607a-fb99-42ad-a35d-408d472897cf from this chassis (sb_readonly=0)
Nov 28 18:18:56 compute-0 ovn_controller[97771]: 2025-11-28T18:18:56Z|00111|binding|INFO|Setting lport 083a607a-fb99-42ad-a35d-408d472897cf down in Southbound
Nov 28 18:18:56 compute-0 ovn_controller[97771]: 2025-11-28T18:18:56Z|00112|binding|INFO|Removing iface tap083a607a-fb ovn-installed in OVS
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.260 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:56 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:56.289 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:e4:d2 10.100.0.8'], port_security=['fa:16:3e:d8:e4:d2 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b8886654-0bcc-4b6e-a66e-aa6365e827f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-767cff4d-c983-406c-a89f-ce8a60b36587', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9848e024a7d14a6c9665c58283238c37', 'neutron:revision_number': '4', 'neutron:security_group_ids': '944df12e-66df-4054-adad-89252fda4f64', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f08329fe-a5f6-40a4-b5a3-7cf13174dc88, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=083a607a-fb99-42ad-a35d-408d472897cf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.291 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:56 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:56.293 106624 INFO neutron.agent.ovn.metadata.agent [-] Port 083a607a-fb99-42ad-a35d-408d472897cf in datapath 767cff4d-c983-406c-a89f-ce8a60b36587 unbound from our chassis#033[00m
Nov 28 18:18:56 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:56.295 106624 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 767cff4d-c983-406c-a89f-ce8a60b36587, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 28 18:18:56 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:56.296 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[1cb76987-9f66-428c-9523-473054230b5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:56 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:56.296 106624 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-767cff4d-c983-406c-a89f-ce8a60b36587 namespace which is not needed anymore#033[00m
Nov 28 18:18:56 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Nov 28 18:18:56 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 7.129s CPU time.
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.332 189300 DEBUG nova.network.neutron [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Successfully created port: 7a69f46e-77c5-4129-9783-254170a7422b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 28 18:18:56 compute-0 systemd-machined[155703]: Machine qemu-10-instance-0000000a terminated.
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.444 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.451 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:56 compute-0 neutron-haproxy-ovnmeta-767cff4d-c983-406c-a89f-ce8a60b36587[249230]: [NOTICE]   (249234) : haproxy version is 2.8.14-c23fe91
Nov 28 18:18:56 compute-0 neutron-haproxy-ovnmeta-767cff4d-c983-406c-a89f-ce8a60b36587[249230]: [NOTICE]   (249234) : path to executable is /usr/sbin/haproxy
Nov 28 18:18:56 compute-0 neutron-haproxy-ovnmeta-767cff4d-c983-406c-a89f-ce8a60b36587[249230]: [WARNING]  (249234) : Exiting Master process...
Nov 28 18:18:56 compute-0 neutron-haproxy-ovnmeta-767cff4d-c983-406c-a89f-ce8a60b36587[249230]: [ALERT]    (249234) : Current worker (249236) exited with code 143 (Terminated)
Nov 28 18:18:56 compute-0 neutron-haproxy-ovnmeta-767cff4d-c983-406c-a89f-ce8a60b36587[249230]: [WARNING]  (249234) : All workers exited. Exiting... (0)
Nov 28 18:18:56 compute-0 systemd[1]: libpod-be368dee7980f8cee9929a1637bd14309c014781b538f69fd050a1c3845728ba.scope: Deactivated successfully.
Nov 28 18:18:56 compute-0 conmon[249230]: conmon be368dee7980f8cee992 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-be368dee7980f8cee9929a1637bd14309c014781b538f69fd050a1c3845728ba.scope/container/memory.events
Nov 28 18:18:56 compute-0 podman[249312]: 2025-11-28 18:18:56.471529289 +0000 UTC m=+0.060963256 container died be368dee7980f8cee9929a1637bd14309c014781b538f69fd050a1c3845728ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-767cff4d-c983-406c-a89f-ce8a60b36587, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.485 189300 INFO nova.virt.libvirt.driver [-] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Instance destroyed successfully.#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.486 189300 DEBUG nova.objects.instance [None req-83dc5172-bfc2-4bc3-9fa9-e9f1fb83b4b0 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Lazy-loading 'resources' on Instance uuid b8886654-0bcc-4b6e-a66e-aa6365e827f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:18:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-be368dee7980f8cee9929a1637bd14309c014781b538f69fd050a1c3845728ba-userdata-shm.mount: Deactivated successfully.
Nov 28 18:18:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-70b6b2b174b83f398487b70453fc4033bf204a545601b9f032738b948a34b662-merged.mount: Deactivated successfully.
Nov 28 18:18:56 compute-0 podman[249312]: 2025-11-28 18:18:56.517719171 +0000 UTC m=+0.107153138 container cleanup be368dee7980f8cee9929a1637bd14309c014781b538f69fd050a1c3845728ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-767cff4d-c983-406c-a89f-ce8a60b36587, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.518 189300 DEBUG nova.virt.libvirt.vif [None req-83dc5172-bfc2-4bc3-9fa9-e9f1fb83b4b0 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-28T18:18:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-600273819',display_name='tempest-ServerAddressesTestJSON-server-600273819',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-600273819',id=10,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-28T18:18:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9848e024a7d14a6c9665c58283238c37',ramdisk_id='',reservation_id='r-b24snvz2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-122096787',owner_user_name='tempest-ServerAddressesTestJSON-122096787-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-28T18:18:50Z,user_data=None,user_id='d4a66bec161e46a6ba097408338141a1',uuid=b8886654-0bcc-4b6e-a66e-aa6365e827f3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "083a607a-fb99-42ad-a35d-408d472897cf", "address": "fa:16:3e:d8:e4:d2", "network": {"id": "767cff4d-c983-406c-a89f-ce8a60b36587", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-310277457-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9848e024a7d14a6c9665c58283238c37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap083a607a-fb", "ovs_interfaceid": "083a607a-fb99-42ad-a35d-408d472897cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.519 189300 DEBUG nova.network.os_vif_util [None req-83dc5172-bfc2-4bc3-9fa9-e9f1fb83b4b0 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Converting VIF {"id": "083a607a-fb99-42ad-a35d-408d472897cf", "address": "fa:16:3e:d8:e4:d2", "network": {"id": "767cff4d-c983-406c-a89f-ce8a60b36587", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-310277457-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9848e024a7d14a6c9665c58283238c37", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap083a607a-fb", "ovs_interfaceid": "083a607a-fb99-42ad-a35d-408d472897cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.519 189300 DEBUG nova.network.os_vif_util [None req-83dc5172-bfc2-4bc3-9fa9-e9f1fb83b4b0 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:e4:d2,bridge_name='br-int',has_traffic_filtering=True,id=083a607a-fb99-42ad-a35d-408d472897cf,network=Network(767cff4d-c983-406c-a89f-ce8a60b36587),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap083a607a-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.520 189300 DEBUG os_vif [None req-83dc5172-bfc2-4bc3-9fa9-e9f1fb83b4b0 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:e4:d2,bridge_name='br-int',has_traffic_filtering=True,id=083a607a-fb99-42ad-a35d-408d472897cf,network=Network(767cff4d-c983-406c-a89f-ce8a60b36587),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap083a607a-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.521 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.521 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap083a607a-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.523 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.524 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.525 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.527 189300 INFO os_vif [None req-83dc5172-bfc2-4bc3-9fa9-e9f1fb83b4b0 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:e4:d2,bridge_name='br-int',has_traffic_filtering=True,id=083a607a-fb99-42ad-a35d-408d472897cf,network=Network(767cff4d-c983-406c-a89f-ce8a60b36587),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap083a607a-fb')#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.528 189300 INFO nova.virt.libvirt.driver [None req-83dc5172-bfc2-4bc3-9fa9-e9f1fb83b4b0 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Deleting instance files /var/lib/nova/instances/b8886654-0bcc-4b6e-a66e-aa6365e827f3_del#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.528 189300 INFO nova.virt.libvirt.driver [None req-83dc5172-bfc2-4bc3-9fa9-e9f1fb83b4b0 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Deletion of /var/lib/nova/instances/b8886654-0bcc-4b6e-a66e-aa6365e827f3_del complete#033[00m
Nov 28 18:18:56 compute-0 systemd[1]: libpod-conmon-be368dee7980f8cee9929a1637bd14309c014781b538f69fd050a1c3845728ba.scope: Deactivated successfully.
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.587 189300 DEBUG nova.objects.instance [None req-ce144944-406a-4e9e-b7e3-bd44d3b1a49e f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Lazy-loading 'flavor' on Instance uuid 1b9021c0-08c4-448d-9f6c-a589a543fb4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:18:56 compute-0 podman[249353]: 2025-11-28 18:18:56.590941086 +0000 UTC m=+0.048658424 container remove be368dee7980f8cee9929a1637bd14309c014781b538f69fd050a1c3845728ba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-767cff4d-c983-406c-a89f-ce8a60b36587, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 28 18:18:56 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:56.598 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[833ed127-f3ea-43e8-a8df-a3702e2ba918]: (4, ('Fri Nov 28 06:18:56 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-767cff4d-c983-406c-a89f-ce8a60b36587 (be368dee7980f8cee9929a1637bd14309c014781b538f69fd050a1c3845728ba)\nbe368dee7980f8cee9929a1637bd14309c014781b538f69fd050a1c3845728ba\nFri Nov 28 06:18:56 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-767cff4d-c983-406c-a89f-ce8a60b36587 (be368dee7980f8cee9929a1637bd14309c014781b538f69fd050a1c3845728ba)\nbe368dee7980f8cee9929a1637bd14309c014781b538f69fd050a1c3845728ba\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:56 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:56.599 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[e1dcaed8-997c-48bc-8653-75aedf9c82e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:56 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:56.600 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap767cff4d-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.602 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:56 compute-0 kernel: tap767cff4d-c0: left promiscuous mode
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.606 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:56 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:56.609 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[4210f877-f092-4428-be78-d80cec951efd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.618 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:56 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:56.632 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[2bb18d0f-49b4-45bd-a4d6-c0d64049e5b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:56 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:56.634 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[b4891399-decf-4d8a-9ef8-259e44a25b65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.650 189300 DEBUG oslo_concurrency.lockutils [None req-ce144944-406a-4e9e-b7e3-bd44d3b1a49e f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Acquiring lock "refresh_cache-1b9021c0-08c4-448d-9f6c-a589a543fb4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.650 189300 DEBUG oslo_concurrency.lockutils [None req-ce144944-406a-4e9e-b7e3-bd44d3b1a49e f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Acquired lock "refresh_cache-1b9021c0-08c4-448d-9f6c-a589a543fb4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.652 189300 INFO nova.compute.manager [None req-83dc5172-bfc2-4bc3-9fa9-e9f1fb83b4b0 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Took 0.43 seconds to destroy the instance on the hypervisor.#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.652 189300 DEBUG oslo.service.loopingcall [None req-83dc5172-bfc2-4bc3-9fa9-e9f1fb83b4b0 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.653 189300 DEBUG nova.compute.manager [-] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.654 189300 DEBUG nova.network.neutron [-] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 28 18:18:56 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:56.653 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[2e81e3f6-8923-4621-bfe1-86e4610e7d97]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507397, 'reachable_time': 33187, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249368, 'error': None, 'target': 'ovnmeta-767cff4d-c983-406c-a89f-ce8a60b36587', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d767cff4d\x2dc983\x2d406c\x2da89f\x2dce8a60b36587.mount: Deactivated successfully.
Nov 28 18:18:56 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:56.656 106734 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-767cff4d-c983-406c-a89f-ce8a60b36587 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 28 18:18:56 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:18:56.657 106734 DEBUG oslo.privsep.daemon [-] privsep: reply[3041b68c-9542-48ca-80f4-5b661d8f9a7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.866 189300 DEBUG nova.compute.manager [req-3c46e5c6-c1a7-4cbd-91b6-9ec33aa39292 req-3d47f081-20fa-4d59-a6d2-f09d6d1780c8 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Received event network-vif-unplugged-083a607a-fb99-42ad-a35d-408d472897cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.866 189300 DEBUG oslo_concurrency.lockutils [req-3c46e5c6-c1a7-4cbd-91b6-9ec33aa39292 req-3d47f081-20fa-4d59-a6d2-f09d6d1780c8 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "b8886654-0bcc-4b6e-a66e-aa6365e827f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.866 189300 DEBUG oslo_concurrency.lockutils [req-3c46e5c6-c1a7-4cbd-91b6-9ec33aa39292 req-3d47f081-20fa-4d59-a6d2-f09d6d1780c8 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "b8886654-0bcc-4b6e-a66e-aa6365e827f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.866 189300 DEBUG oslo_concurrency.lockutils [req-3c46e5c6-c1a7-4cbd-91b6-9ec33aa39292 req-3d47f081-20fa-4d59-a6d2-f09d6d1780c8 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "b8886654-0bcc-4b6e-a66e-aa6365e827f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.867 189300 DEBUG nova.compute.manager [req-3c46e5c6-c1a7-4cbd-91b6-9ec33aa39292 req-3d47f081-20fa-4d59-a6d2-f09d6d1780c8 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] No waiting events found dispatching network-vif-unplugged-083a607a-fb99-42ad-a35d-408d472897cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:18:56 compute-0 nova_compute[189296]: 2025-11-28 18:18:56.867 189300 DEBUG nova.compute.manager [req-3c46e5c6-c1a7-4cbd-91b6-9ec33aa39292 req-3d47f081-20fa-4d59-a6d2-f09d6d1780c8 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Received event network-vif-unplugged-083a607a-fb99-42ad-a35d-408d472897cf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 28 18:18:58 compute-0 nova_compute[189296]: 2025-11-28 18:18:58.019 189300 DEBUG nova.network.neutron [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Successfully updated port: 7a69f46e-77c5-4129-9783-254170a7422b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 28 18:18:58 compute-0 nova_compute[189296]: 2025-11-28 18:18:58.041 189300 DEBUG oslo_concurrency.lockutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Acquiring lock "refresh_cache-0af9c8e6-8030-462a-9dfd-d52f041685f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:18:58 compute-0 nova_compute[189296]: 2025-11-28 18:18:58.042 189300 DEBUG oslo_concurrency.lockutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Acquired lock "refresh_cache-0af9c8e6-8030-462a-9dfd-d52f041685f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:18:58 compute-0 nova_compute[189296]: 2025-11-28 18:18:58.042 189300 DEBUG nova.network.neutron [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 28 18:18:58 compute-0 nova_compute[189296]: 2025-11-28 18:18:58.265 189300 DEBUG nova.compute.manager [req-5ba8d70e-4c6f-4c58-bc1d-a23f0f551611 req-d568c813-4fa6-4908-b57c-7c47c6271b59 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Received event network-changed-7a69f46e-77c5-4129-9783-254170a7422b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:18:58 compute-0 nova_compute[189296]: 2025-11-28 18:18:58.266 189300 DEBUG nova.compute.manager [req-5ba8d70e-4c6f-4c58-bc1d-a23f0f551611 req-d568c813-4fa6-4908-b57c-7c47c6271b59 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Refreshing instance network info cache due to event network-changed-7a69f46e-77c5-4129-9783-254170a7422b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 28 18:18:58 compute-0 nova_compute[189296]: 2025-11-28 18:18:58.266 189300 DEBUG oslo_concurrency.lockutils [req-5ba8d70e-4c6f-4c58-bc1d-a23f0f551611 req-d568c813-4fa6-4908-b57c-7c47c6271b59 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-0af9c8e6-8030-462a-9dfd-d52f041685f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:18:58 compute-0 nova_compute[189296]: 2025-11-28 18:18:58.402 189300 DEBUG nova.network.neutron [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 28 18:18:58 compute-0 nova_compute[189296]: 2025-11-28 18:18:58.423 189300 DEBUG nova.network.neutron [-] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:18:58 compute-0 nova_compute[189296]: 2025-11-28 18:18:58.451 189300 INFO nova.compute.manager [-] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Took 1.80 seconds to deallocate network for instance.#033[00m
Nov 28 18:18:58 compute-0 nova_compute[189296]: 2025-11-28 18:18:58.521 189300 DEBUG oslo_concurrency.lockutils [None req-83dc5172-bfc2-4bc3-9fa9-e9f1fb83b4b0 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:58 compute-0 nova_compute[189296]: 2025-11-28 18:18:58.521 189300 DEBUG oslo_concurrency.lockutils [None req-83dc5172-bfc2-4bc3-9fa9-e9f1fb83b4b0 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:58 compute-0 nova_compute[189296]: 2025-11-28 18:18:58.610 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:58 compute-0 nova_compute[189296]: 2025-11-28 18:18:58.618 189300 DEBUG nova.compute.provider_tree [None req-83dc5172-bfc2-4bc3-9fa9-e9f1fb83b4b0 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:18:58 compute-0 nova_compute[189296]: 2025-11-28 18:18:58.635 189300 DEBUG nova.scheduler.client.report [None req-83dc5172-bfc2-4bc3-9fa9-e9f1fb83b4b0 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:18:58 compute-0 nova_compute[189296]: 2025-11-28 18:18:58.681 189300 DEBUG oslo_concurrency.lockutils [None req-83dc5172-bfc2-4bc3-9fa9-e9f1fb83b4b0 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:58 compute-0 nova_compute[189296]: 2025-11-28 18:18:58.734 189300 INFO nova.scheduler.client.report [None req-83dc5172-bfc2-4bc3-9fa9-e9f1fb83b4b0 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Deleted allocations for instance b8886654-0bcc-4b6e-a66e-aa6365e827f3#033[00m
Nov 28 18:18:58 compute-0 nova_compute[189296]: 2025-11-28 18:18:58.799 189300 DEBUG oslo_concurrency.lockutils [None req-83dc5172-bfc2-4bc3-9fa9-e9f1fb83b4b0 d4a66bec161e46a6ba097408338141a1 9848e024a7d14a6c9665c58283238c37 - - default default] Lock "b8886654-0bcc-4b6e-a66e-aa6365e827f3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:58 compute-0 nova_compute[189296]: 2025-11-28 18:18:58.986 189300 DEBUG nova.compute.manager [req-66b5465c-9eef-46d2-a105-ea7fcc82c894 req-404060e5-df52-4af3-9d88-a8e6acbdc997 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Received event network-vif-plugged-083a607a-fb99-42ad-a35d-408d472897cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:18:58 compute-0 nova_compute[189296]: 2025-11-28 18:18:58.986 189300 DEBUG oslo_concurrency.lockutils [req-66b5465c-9eef-46d2-a105-ea7fcc82c894 req-404060e5-df52-4af3-9d88-a8e6acbdc997 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "b8886654-0bcc-4b6e-a66e-aa6365e827f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:58 compute-0 nova_compute[189296]: 2025-11-28 18:18:58.987 189300 DEBUG oslo_concurrency.lockutils [req-66b5465c-9eef-46d2-a105-ea7fcc82c894 req-404060e5-df52-4af3-9d88-a8e6acbdc997 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "b8886654-0bcc-4b6e-a66e-aa6365e827f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:58 compute-0 nova_compute[189296]: 2025-11-28 18:18:58.990 189300 DEBUG oslo_concurrency.lockutils [req-66b5465c-9eef-46d2-a105-ea7fcc82c894 req-404060e5-df52-4af3-9d88-a8e6acbdc997 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "b8886654-0bcc-4b6e-a66e-aa6365e827f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:58 compute-0 nova_compute[189296]: 2025-11-28 18:18:58.995 189300 DEBUG nova.compute.manager [req-66b5465c-9eef-46d2-a105-ea7fcc82c894 req-404060e5-df52-4af3-9d88-a8e6acbdc997 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] No waiting events found dispatching network-vif-plugged-083a607a-fb99-42ad-a35d-408d472897cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:18:58 compute-0 nova_compute[189296]: 2025-11-28 18:18:58.995 189300 WARNING nova.compute.manager [req-66b5465c-9eef-46d2-a105-ea7fcc82c894 req-404060e5-df52-4af3-9d88-a8e6acbdc997 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Received unexpected event network-vif-plugged-083a607a-fb99-42ad-a35d-408d472897cf for instance with vm_state deleted and task_state None.#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.476 189300 DEBUG nova.network.neutron [None req-ce144944-406a-4e9e-b7e3-bd44d3b1a49e f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.575 189300 DEBUG nova.network.neutron [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Updating instance_info_cache with network_info: [{"id": "7a69f46e-77c5-4129-9783-254170a7422b", "address": "fa:16:3e:45:0d:59", "network": {"id": "16e2cef3-e4a2-4570-962f-fcbf9f3d2577", "bridge": "br-int", "label": "tempest-network-smoke--630554822", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c41bbf2b30ca428fbd489c3dc29e8045", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a69f46e-77", "ovs_interfaceid": "7a69f46e-77c5-4129-9783-254170a7422b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.606 189300 DEBUG oslo_concurrency.lockutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Releasing lock "refresh_cache-0af9c8e6-8030-462a-9dfd-d52f041685f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.607 189300 DEBUG nova.compute.manager [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Instance network_info: |[{"id": "7a69f46e-77c5-4129-9783-254170a7422b", "address": "fa:16:3e:45:0d:59", "network": {"id": "16e2cef3-e4a2-4570-962f-fcbf9f3d2577", "bridge": "br-int", "label": "tempest-network-smoke--630554822", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c41bbf2b30ca428fbd489c3dc29e8045", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a69f46e-77", "ovs_interfaceid": "7a69f46e-77c5-4129-9783-254170a7422b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.607 189300 DEBUG oslo_concurrency.lockutils [req-5ba8d70e-4c6f-4c58-bc1d-a23f0f551611 req-d568c813-4fa6-4908-b57c-7c47c6271b59 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-0af9c8e6-8030-462a-9dfd-d52f041685f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.607 189300 DEBUG nova.network.neutron [req-5ba8d70e-4c6f-4c58-bc1d-a23f0f551611 req-d568c813-4fa6-4908-b57c-7c47c6271b59 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Refreshing network info cache for port 7a69f46e-77c5-4129-9783-254170a7422b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.609 189300 DEBUG nova.virt.libvirt.driver [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Start _get_guest_xml network_info=[{"id": "7a69f46e-77c5-4129-9783-254170a7422b", "address": "fa:16:3e:45:0d:59", "network": {"id": "16e2cef3-e4a2-4570-962f-fcbf9f3d2577", "bridge": "br-int", "label": "tempest-network-smoke--630554822", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c41bbf2b30ca428fbd489c3dc29e8045", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a69f46e-77", "ovs_interfaceid": "7a69f46e-77c5-4129-9783-254170a7422b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-28T18:16:38Z,direct_url=<?>,disk_format='qcow2',id=ffec9e61-65fb-46ae-8d34-338639229ec3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-28T18:16:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'image_id': 'ffec9e61-65fb-46ae-8d34-338639229ec3'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.617 189300 WARNING nova.virt.libvirt.driver [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.632 189300 DEBUG nova.virt.libvirt.host [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.633 189300 DEBUG nova.virt.libvirt.host [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.645 189300 DEBUG nova.virt.libvirt.host [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.646 189300 DEBUG nova.virt.libvirt.host [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.647 189300 DEBUG nova.virt.libvirt.driver [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.647 189300 DEBUG nova.virt.hardware [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-28T18:16:37Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b177f611-8f79-4bfd-9a12-e83e9545757b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-28T18:16:38Z,direct_url=<?>,disk_format='qcow2',id=ffec9e61-65fb-46ae-8d34-338639229ec3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-28T18:16:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.648 189300 DEBUG nova.virt.hardware [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.648 189300 DEBUG nova.virt.hardware [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.648 189300 DEBUG nova.virt.hardware [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.648 189300 DEBUG nova.virt.hardware [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.649 189300 DEBUG nova.virt.hardware [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.649 189300 DEBUG nova.virt.hardware [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.649 189300 DEBUG nova.virt.hardware [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.649 189300 DEBUG nova.virt.hardware [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.650 189300 DEBUG nova.virt.hardware [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.650 189300 DEBUG nova.virt.hardware [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.654 189300 DEBUG nova.virt.libvirt.vif [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T18:18:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-908375146',display_name='tempest-TestNetworkBasicOps-server-908375146',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-908375146',id=11,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN6RYNuMt0ux6thdsomjwa4Qs3aHYbmEffy0T9nTP+KpV9lW5YOnUFrYqthp/EVQN7jr7eca+MHb2GG22h2Znvet440rtEqhcxFnCX0g2QQ1dII6j+XnRVx4kNOEKGv/ow==',key_name='tempest-TestNetworkBasicOps-844617280',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c41bbf2b30ca428fbd489c3dc29e8045',ramdisk_id='',reservation_id='r-b39009u9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-543144913',owner_user_name='tempest-TestNetworkBasicOps-543144913-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:18:53Z,user_data=None,user_id='0052e0d91c7e4c98bd11644a4dca818a',uuid=0af9c8e6-8030-462a-9dfd-d52f041685f5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7a69f46e-77c5-4129-9783-254170a7422b", "address": "fa:16:3e:45:0d:59", "network": {"id": "16e2cef3-e4a2-4570-962f-fcbf9f3d2577", "bridge": "br-int", "label": "tempest-network-smoke--630554822", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "c41bbf2b30ca428fbd489c3dc29e8045", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a69f46e-77", "ovs_interfaceid": "7a69f46e-77c5-4129-9783-254170a7422b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.655 189300 DEBUG nova.network.os_vif_util [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Converting VIF {"id": "7a69f46e-77c5-4129-9783-254170a7422b", "address": "fa:16:3e:45:0d:59", "network": {"id": "16e2cef3-e4a2-4570-962f-fcbf9f3d2577", "bridge": "br-int", "label": "tempest-network-smoke--630554822", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c41bbf2b30ca428fbd489c3dc29e8045", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a69f46e-77", "ovs_interfaceid": "7a69f46e-77c5-4129-9783-254170a7422b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.655 189300 DEBUG nova.network.os_vif_util [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:0d:59,bridge_name='br-int',has_traffic_filtering=True,id=7a69f46e-77c5-4129-9783-254170a7422b,network=Network(16e2cef3-e4a2-4570-962f-fcbf9f3d2577),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a69f46e-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.657 189300 DEBUG nova.objects.instance [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0af9c8e6-8030-462a-9dfd-d52f041685f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.705 189300 DEBUG nova.virt.libvirt.driver [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] End _get_guest_xml xml=<domain type="kvm">
Nov 28 18:18:59 compute-0 nova_compute[189296]:  <uuid>0af9c8e6-8030-462a-9dfd-d52f041685f5</uuid>
Nov 28 18:18:59 compute-0 nova_compute[189296]:  <name>instance-0000000b</name>
Nov 28 18:18:59 compute-0 nova_compute[189296]:  <memory>131072</memory>
Nov 28 18:18:59 compute-0 nova_compute[189296]:  <vcpu>1</vcpu>
Nov 28 18:18:59 compute-0 nova_compute[189296]:  <metadata>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <nova:name>tempest-TestNetworkBasicOps-server-908375146</nova:name>
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <nova:creationTime>2025-11-28 18:18:59</nova:creationTime>
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <nova:flavor name="m1.nano">
Nov 28 18:18:59 compute-0 nova_compute[189296]:        <nova:memory>128</nova:memory>
Nov 28 18:18:59 compute-0 nova_compute[189296]:        <nova:disk>1</nova:disk>
Nov 28 18:18:59 compute-0 nova_compute[189296]:        <nova:swap>0</nova:swap>
Nov 28 18:18:59 compute-0 nova_compute[189296]:        <nova:ephemeral>0</nova:ephemeral>
Nov 28 18:18:59 compute-0 nova_compute[189296]:        <nova:vcpus>1</nova:vcpus>
Nov 28 18:18:59 compute-0 nova_compute[189296]:      </nova:flavor>
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <nova:owner>
Nov 28 18:18:59 compute-0 nova_compute[189296]:        <nova:user uuid="0052e0d91c7e4c98bd11644a4dca818a">tempest-TestNetworkBasicOps-543144913-project-member</nova:user>
Nov 28 18:18:59 compute-0 nova_compute[189296]:        <nova:project uuid="c41bbf2b30ca428fbd489c3dc29e8045">tempest-TestNetworkBasicOps-543144913</nova:project>
Nov 28 18:18:59 compute-0 nova_compute[189296]:      </nova:owner>
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <nova:root type="image" uuid="ffec9e61-65fb-46ae-8d34-338639229ec3"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <nova:ports>
Nov 28 18:18:59 compute-0 nova_compute[189296]:        <nova:port uuid="7a69f46e-77c5-4129-9783-254170a7422b">
Nov 28 18:18:59 compute-0 nova_compute[189296]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:        </nova:port>
Nov 28 18:18:59 compute-0 nova_compute[189296]:      </nova:ports>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    </nova:instance>
Nov 28 18:18:59 compute-0 nova_compute[189296]:  </metadata>
Nov 28 18:18:59 compute-0 nova_compute[189296]:  <sysinfo type="smbios">
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <system>
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <entry name="manufacturer">RDO</entry>
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <entry name="product">OpenStack Compute</entry>
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <entry name="serial">0af9c8e6-8030-462a-9dfd-d52f041685f5</entry>
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <entry name="uuid">0af9c8e6-8030-462a-9dfd-d52f041685f5</entry>
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <entry name="family">Virtual Machine</entry>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    </system>
Nov 28 18:18:59 compute-0 nova_compute[189296]:  </sysinfo>
Nov 28 18:18:59 compute-0 nova_compute[189296]:  <os>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <boot dev="hd"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <smbios mode="sysinfo"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:  </os>
Nov 28 18:18:59 compute-0 nova_compute[189296]:  <features>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <acpi/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <apic/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <vmcoreinfo/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:  </features>
Nov 28 18:18:59 compute-0 nova_compute[189296]:  <clock offset="utc">
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <timer name="pit" tickpolicy="delay"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <timer name="hpet" present="no"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:  </clock>
Nov 28 18:18:59 compute-0 nova_compute[189296]:  <cpu mode="host-model" match="exact">
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <topology sockets="1" cores="1" threads="1"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:  </cpu>
Nov 28 18:18:59 compute-0 nova_compute[189296]:  <devices>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <disk type="file" device="disk">
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/0af9c8e6-8030-462a-9dfd-d52f041685f5/disk"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <target dev="vda" bus="virtio"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <disk type="file" device="cdrom">
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <driver name="qemu" type="raw" cache="none"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/0af9c8e6-8030-462a-9dfd-d52f041685f5/disk.config"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <target dev="sda" bus="sata"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <interface type="ethernet">
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <mac address="fa:16:3e:45:0d:59"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <driver name="vhost" rx_queue_size="512"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <mtu size="1442"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <target dev="tap7a69f46e-77"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    </interface>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <serial type="pty">
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <log file="/var/lib/nova/instances/0af9c8e6-8030-462a-9dfd-d52f041685f5/console.log" append="off"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    </serial>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <video>
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    </video>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <input type="tablet" bus="usb"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <rng model="virtio">
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <backend model="random">/dev/urandom</backend>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    </rng>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <controller type="usb" index="0"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    <memballoon model="virtio">
Nov 28 18:18:59 compute-0 nova_compute[189296]:      <stats period="10"/>
Nov 28 18:18:59 compute-0 nova_compute[189296]:    </memballoon>
Nov 28 18:18:59 compute-0 nova_compute[189296]:  </devices>
Nov 28 18:18:59 compute-0 nova_compute[189296]: </domain>
Nov 28 18:18:59 compute-0 nova_compute[189296]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.706 189300 DEBUG nova.compute.manager [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Preparing to wait for external event network-vif-plugged-7a69f46e-77c5-4129-9783-254170a7422b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.706 189300 DEBUG oslo_concurrency.lockutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Acquiring lock "0af9c8e6-8030-462a-9dfd-d52f041685f5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.707 189300 DEBUG oslo_concurrency.lockutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "0af9c8e6-8030-462a-9dfd-d52f041685f5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.707 189300 DEBUG oslo_concurrency.lockutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "0af9c8e6-8030-462a-9dfd-d52f041685f5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.708 189300 DEBUG nova.virt.libvirt.vif [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T18:18:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-908375146',display_name='tempest-TestNetworkBasicOps-server-908375146',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-908375146',id=11,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN6RYNuMt0ux6thdsomjwa4Qs3aHYbmEffy0T9nTP+KpV9lW5YOnUFrYqthp/EVQN7jr7eca+MHb2GG22h2Znvet440rtEqhcxFnCX0g2QQ1dII6j+XnRVx4kNOEKGv/ow==',key_name='tempest-TestNetworkBasicOps-844617280',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c41bbf2b30ca428fbd489c3dc29e8045',ramdisk_id='',reservation_id='r-b39009u9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-543144913',owner_user_name='tempest-TestNetworkBasicOps-543144913-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:18:53Z,user_data=None,user_id='0052e0d91c7e4c98bd11644a4dca818a',uuid=0af9c8e6-8030-462a-9dfd-d52f041685f5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7a69f46e-77c5-4129-9783-254170a7422b", "address": "fa:16:3e:45:0d:59", "network": {"id": "16e2cef3-e4a2-4570-962f-fcbf9f3d2577", "bridge": "br-int", "label": "tempest-network-smoke--630554822", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "c41bbf2b30ca428fbd489c3dc29e8045", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a69f46e-77", "ovs_interfaceid": "7a69f46e-77c5-4129-9783-254170a7422b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.708 189300 DEBUG nova.network.os_vif_util [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Converting VIF {"id": "7a69f46e-77c5-4129-9783-254170a7422b", "address": "fa:16:3e:45:0d:59", "network": {"id": "16e2cef3-e4a2-4570-962f-fcbf9f3d2577", "bridge": "br-int", "label": "tempest-network-smoke--630554822", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c41bbf2b30ca428fbd489c3dc29e8045", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a69f46e-77", "ovs_interfaceid": "7a69f46e-77c5-4129-9783-254170a7422b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.709 189300 DEBUG nova.network.os_vif_util [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:0d:59,bridge_name='br-int',has_traffic_filtering=True,id=7a69f46e-77c5-4129-9783-254170a7422b,network=Network(16e2cef3-e4a2-4570-962f-fcbf9f3d2577),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a69f46e-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.710 189300 DEBUG os_vif [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:0d:59,bridge_name='br-int',has_traffic_filtering=True,id=7a69f46e-77c5-4129-9783-254170a7422b,network=Network(16e2cef3-e4a2-4570-962f-fcbf9f3d2577),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a69f46e-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.710 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.711 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.711 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.715 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.716 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a69f46e-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.716 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7a69f46e-77, col_values=(('external_ids', {'iface-id': '7a69f46e-77c5-4129-9783-254170a7422b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:0d:59', 'vm-uuid': '0af9c8e6-8030-462a-9dfd-d52f041685f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:18:59 compute-0 NetworkManager[56307]: <info>  [1764353939.7195] manager: (tap7a69f46e-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.720 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.727 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.727 189300 INFO os_vif [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:0d:59,bridge_name='br-int',has_traffic_filtering=True,id=7a69f46e-77c5-4129-9783-254170a7422b,network=Network(16e2cef3-e4a2-4570-962f-fcbf9f3d2577),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a69f46e-77')#033[00m
Nov 28 18:18:59 compute-0 podman[203494]: time="2025-11-28T18:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:18:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:18:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4783 "" "Go-http-client/1.1"
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.984 189300 DEBUG nova.virt.libvirt.driver [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.984 189300 DEBUG nova.virt.libvirt.driver [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.985 189300 DEBUG nova.virt.libvirt.driver [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] No VIF found with MAC fa:16:3e:45:0d:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 28 18:18:59 compute-0 nova_compute[189296]: 2025-11-28 18:18:59.985 189300 INFO nova.virt.libvirt.driver [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Using config drive#033[00m
Nov 28 18:19:00 compute-0 nova_compute[189296]: 2025-11-28 18:19:00.399 189300 DEBUG nova.compute.manager [req-7648a199-751f-4afe-a3dc-b2dc91423757 req-01482771-856c-417b-9ad8-f7d0730df2c7 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Received event network-vif-deleted-083a607a-fb99-42ad-a35d-408d472897cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:19:00 compute-0 nova_compute[189296]: 2025-11-28 18:19:00.690 189300 INFO nova.virt.libvirt.driver [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Creating config drive at /var/lib/nova/instances/0af9c8e6-8030-462a-9dfd-d52f041685f5/disk.config#033[00m
Nov 28 18:19:00 compute-0 nova_compute[189296]: 2025-11-28 18:19:00.695 189300 DEBUG oslo_concurrency.processutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0af9c8e6-8030-462a-9dfd-d52f041685f5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp64rlpzzs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:19:00 compute-0 nova_compute[189296]: 2025-11-28 18:19:00.816 189300 DEBUG oslo_concurrency.processutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0af9c8e6-8030-462a-9dfd-d52f041685f5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp64rlpzzs" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:19:00 compute-0 kernel: tap7a69f46e-77: entered promiscuous mode
Nov 28 18:19:00 compute-0 NetworkManager[56307]: <info>  [1764353940.8714] manager: (tap7a69f46e-77): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Nov 28 18:19:00 compute-0 nova_compute[189296]: 2025-11-28 18:19:00.871 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:00 compute-0 ovn_controller[97771]: 2025-11-28T18:19:00Z|00113|binding|INFO|Claiming lport 7a69f46e-77c5-4129-9783-254170a7422b for this chassis.
Nov 28 18:19:00 compute-0 ovn_controller[97771]: 2025-11-28T18:19:00Z|00114|binding|INFO|7a69f46e-77c5-4129-9783-254170a7422b: Claiming fa:16:3e:45:0d:59 10.100.0.9
Nov 28 18:19:00 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:00.886 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:0d:59 10.100.0.9'], port_security=['fa:16:3e:45:0d:59 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0af9c8e6-8030-462a-9dfd-d52f041685f5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16e2cef3-e4a2-4570-962f-fcbf9f3d2577', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c41bbf2b30ca428fbd489c3dc29e8045', 'neutron:revision_number': '2', 'neutron:security_group_ids': '56edd6d4-5886-44e5-ba5f-f7a536fc1148', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7149c56-1986-4c48-b442-f7c364e29e84, chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=7a69f46e-77c5-4129-9783-254170a7422b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:19:00 compute-0 nova_compute[189296]: 2025-11-28 18:19:00.887 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:00 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:00.887 106624 INFO neutron.agent.ovn.metadata.agent [-] Port 7a69f46e-77c5-4129-9783-254170a7422b in datapath 16e2cef3-e4a2-4570-962f-fcbf9f3d2577 bound to our chassis#033[00m
Nov 28 18:19:00 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:00.889 106624 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16e2cef3-e4a2-4570-962f-fcbf9f3d2577#033[00m
Nov 28 18:19:00 compute-0 ovn_controller[97771]: 2025-11-28T18:19:00Z|00115|binding|INFO|Setting lport 7a69f46e-77c5-4129-9783-254170a7422b ovn-installed in OVS
Nov 28 18:19:00 compute-0 ovn_controller[97771]: 2025-11-28T18:19:00Z|00116|binding|INFO|Setting lport 7a69f46e-77c5-4129-9783-254170a7422b up in Southbound
Nov 28 18:19:00 compute-0 nova_compute[189296]: 2025-11-28 18:19:00.890 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:00 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:00.901 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[24681f65-8172-4a92-b75f-5b3ce690f366]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:00 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:00.902 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap16e2cef3-e1 in ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 28 18:19:00 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:00.904 238909 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap16e2cef3-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 28 18:19:00 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:00.904 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[52e00845-7612-4811-9ae0-d03b882af190]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:00 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:00.905 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[8d1f1499-01c8-4ec2-a438-ccfd3916c9ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:00 compute-0 systemd-udevd[249390]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 18:19:00 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:00.916 106734 DEBUG oslo.privsep.daemon [-] privsep: reply[36eb2bea-967b-492d-830f-2f035307fbfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:00 compute-0 NetworkManager[56307]: <info>  [1764353940.9225] device (tap7a69f46e-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 28 18:19:00 compute-0 systemd-machined[155703]: New machine qemu-11-instance-0000000b.
Nov 28 18:19:00 compute-0 NetworkManager[56307]: <info>  [1764353940.9239] device (tap7a69f46e-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 28 18:19:00 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Nov 28 18:19:00 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:00.941 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[62185e36-ea42-4d87-8b74-79892154c4c2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:00 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:00.969 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[55f87318-f8a7-485c-a929-2dfaafa2ff73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:00 compute-0 systemd-udevd[249395]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 18:19:00 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:00.975 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[88d28d22-65b5-40bb-a7db-456d20ff49e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:00 compute-0 NetworkManager[56307]: <info>  [1764353940.9763] manager: (tap16e2cef3-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:01.011 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[18e0a369-c25c-48d2-8ace-34a9b3c22fb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:01.015 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[78827868-7dd1-4289-919f-d93872d681e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:01 compute-0 NetworkManager[56307]: <info>  [1764353941.0380] device (tap16e2cef3-e0): carrier: link connected
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:01.043 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[51a76e24-11eb-4211-bddd-016c5e361530]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:01.060 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[a777ef21-63c4-42ac-850d-3954da57f072]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16e2cef3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:52:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508663, 'reachable_time': 44292, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249423, 'error': None, 'target': 'ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:01.074 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[a499d212-d775-41d1-85b0-a02fe810ab59]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:52b4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508663, 'tstamp': 508663}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249424, 'error': None, 'target': 'ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:01.091 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[4a271527-7353-4db7-9e3c-41e97e13d831]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16e2cef3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:52:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508663, 'reachable_time': 44292, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249425, 'error': None, 'target': 'ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:01.119 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[86ff3647-785a-484d-a2be-47071f869a66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:01.181 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[520e643c-3b38-49b7-ad9d-9b59b26d7549]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:01.183 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16e2cef3-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:01.183 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:01.184 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16e2cef3-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:19:01 compute-0 nova_compute[189296]: 2025-11-28 18:19:01.187 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:01 compute-0 kernel: tap16e2cef3-e0: entered promiscuous mode
Nov 28 18:19:01 compute-0 NetworkManager[56307]: <info>  [1764353941.1882] manager: (tap16e2cef3-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Nov 28 18:19:01 compute-0 nova_compute[189296]: 2025-11-28 18:19:01.189 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:01.190 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16e2cef3-e0, col_values=(('external_ids', {'iface-id': 'fadccca5-e309-4390-a64b-6711ee103450'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:19:01 compute-0 nova_compute[189296]: 2025-11-28 18:19:01.191 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:01 compute-0 nova_compute[189296]: 2025-11-28 18:19:01.193 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:01 compute-0 ovn_controller[97771]: 2025-11-28T18:19:01Z|00117|binding|INFO|Releasing lport fadccca5-e309-4390-a64b-6711ee103450 from this chassis (sb_readonly=0)
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:01.200 106624 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/16e2cef3-e4a2-4570-962f-fcbf9f3d2577.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/16e2cef3-e4a2-4570-962f-fcbf9f3d2577.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:01.202 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[abe6502c-0058-4a61-82f8-8f26571e3767]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:01.202 106624 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]: global
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]:    log         /dev/log local0 debug
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]:    log-tag     haproxy-metadata-proxy-16e2cef3-e4a2-4570-962f-fcbf9f3d2577
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]:    user        root
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]:    group       root
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]:    maxconn     1024
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]:    pidfile     /var/lib/neutron/external/pids/16e2cef3-e4a2-4570-962f-fcbf9f3d2577.pid.haproxy
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]:    daemon
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]: defaults
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]:    log global
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]:    mode http
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]:    option httplog
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]:    option dontlognull
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]:    option http-server-close
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]:    option forwardfor
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]:    retries                 3
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]:    timeout http-request    30s
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]:    timeout connect         30s
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]:    timeout client          32s
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]:    timeout server          32s
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]:    timeout http-keep-alive 30s
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]: listen listener
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]:    bind 169.254.169.254:80
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]:    server metadata /var/lib/neutron/metadata_proxy
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]:    http-request add-header X-OVN-Network-ID 16e2cef3-e4a2-4570-962f-fcbf9f3d2577
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 28 18:19:01 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:01.203 106624 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577', 'env', 'PROCESS_TAG=haproxy-16e2cef3-e4a2-4570-962f-fcbf9f3d2577', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/16e2cef3-e4a2-4570-962f-fcbf9f3d2577.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 28 18:19:01 compute-0 nova_compute[189296]: 2025-11-28 18:19:01.211 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:01 compute-0 openstack_network_exporter[205632]: ERROR   18:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:19:01 compute-0 openstack_network_exporter[205632]: ERROR   18:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:19:01 compute-0 openstack_network_exporter[205632]: ERROR   18:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:19:01 compute-0 openstack_network_exporter[205632]: ERROR   18:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:19:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:19:01 compute-0 openstack_network_exporter[205632]: ERROR   18:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:19:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:19:01 compute-0 nova_compute[189296]: 2025-11-28 18:19:01.426 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764353941.4262547, 0af9c8e6-8030-462a-9dfd-d52f041685f5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:19:01 compute-0 nova_compute[189296]: 2025-11-28 18:19:01.427 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] VM Started (Lifecycle Event)#033[00m
Nov 28 18:19:01 compute-0 nova_compute[189296]: 2025-11-28 18:19:01.484 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:19:01 compute-0 nova_compute[189296]: 2025-11-28 18:19:01.492 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764353941.426376, 0af9c8e6-8030-462a-9dfd-d52f041685f5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:19:01 compute-0 nova_compute[189296]: 2025-11-28 18:19:01.492 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] VM Paused (Lifecycle Event)#033[00m
Nov 28 18:19:01 compute-0 nova_compute[189296]: 2025-11-28 18:19:01.512 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:19:01 compute-0 nova_compute[189296]: 2025-11-28 18:19:01.516 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 18:19:01 compute-0 nova_compute[189296]: 2025-11-28 18:19:01.540 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 28 18:19:01 compute-0 podman[249464]: 2025-11-28 18:19:01.577582297 +0000 UTC m=+0.055670746 container create 85f477f43ad19c518c80e60a70f5753b575e3995037b544b15a929cb1b782a73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:19:01 compute-0 systemd[1]: Started libpod-conmon-85f477f43ad19c518c80e60a70f5753b575e3995037b544b15a929cb1b782a73.scope.
Nov 28 18:19:01 compute-0 podman[249464]: 2025-11-28 18:19:01.551567159 +0000 UTC m=+0.029655638 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 28 18:19:01 compute-0 systemd[1]: Started libcrun container.
Nov 28 18:19:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccba47ac12bdffb2848beb1b0c5db2cd405195e097963e9a889133883e65f702/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 28 18:19:01 compute-0 podman[249464]: 2025-11-28 18:19:01.672598526 +0000 UTC m=+0.150686995 container init 85f477f43ad19c518c80e60a70f5753b575e3995037b544b15a929cb1b782a73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:19:01 compute-0 podman[249464]: 2025-11-28 18:19:01.680874599 +0000 UTC m=+0.158963048 container start 85f477f43ad19c518c80e60a70f5753b575e3995037b544b15a929cb1b782a73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 28 18:19:01 compute-0 podman[249476]: 2025-11-28 18:19:01.686735662 +0000 UTC m=+0.068927280 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 28 18:19:01 compute-0 neutron-haproxy-ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577[249486]: [NOTICE]   (249508) : New worker (249510) forked
Nov 28 18:19:01 compute-0 neutron-haproxy-ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577[249486]: [NOTICE]   (249508) : Loading success.
Nov 28 18:19:01 compute-0 nova_compute[189296]: 2025-11-28 18:19:01.736 189300 DEBUG nova.compute.manager [req-d2b01b66-e551-4d60-9e6d-1ac65e1dd87a req-6244739c-315b-4442-88b8-3e4d2acfb418 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Received event network-changed-c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:19:01 compute-0 nova_compute[189296]: 2025-11-28 18:19:01.736 189300 DEBUG nova.compute.manager [req-d2b01b66-e551-4d60-9e6d-1ac65e1dd87a req-6244739c-315b-4442-88b8-3e4d2acfb418 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Refreshing instance network info cache due to event network-changed-c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 28 18:19:01 compute-0 nova_compute[189296]: 2025-11-28 18:19:01.736 189300 DEBUG oslo_concurrency.lockutils [req-d2b01b66-e551-4d60-9e6d-1ac65e1dd87a req-6244739c-315b-4442-88b8-3e4d2acfb418 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-1b9021c0-08c4-448d-9f6c-a589a543fb4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:19:02 compute-0 nova_compute[189296]: 2025-11-28 18:19:02.731 189300 DEBUG nova.network.neutron [req-5ba8d70e-4c6f-4c58-bc1d-a23f0f551611 req-d568c813-4fa6-4908-b57c-7c47c6271b59 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Updated VIF entry in instance network info cache for port 7a69f46e-77c5-4129-9783-254170a7422b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 28 18:19:02 compute-0 nova_compute[189296]: 2025-11-28 18:19:02.731 189300 DEBUG nova.network.neutron [req-5ba8d70e-4c6f-4c58-bc1d-a23f0f551611 req-d568c813-4fa6-4908-b57c-7c47c6271b59 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Updating instance_info_cache with network_info: [{"id": "7a69f46e-77c5-4129-9783-254170a7422b", "address": "fa:16:3e:45:0d:59", "network": {"id": "16e2cef3-e4a2-4570-962f-fcbf9f3d2577", "bridge": "br-int", "label": "tempest-network-smoke--630554822", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c41bbf2b30ca428fbd489c3dc29e8045", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a69f46e-77", "ovs_interfaceid": "7a69f46e-77c5-4129-9783-254170a7422b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:19:02 compute-0 nova_compute[189296]: 2025-11-28 18:19:02.750 189300 DEBUG oslo_concurrency.lockutils [req-5ba8d70e-4c6f-4c58-bc1d-a23f0f551611 req-d568c813-4fa6-4908-b57c-7c47c6271b59 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-0af9c8e6-8030-462a-9dfd-d52f041685f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:19:02 compute-0 nova_compute[189296]: 2025-11-28 18:19:02.807 189300 DEBUG nova.network.neutron [None req-ce144944-406a-4e9e-b7e3-bd44d3b1a49e f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Updating instance_info_cache with network_info: [{"id": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "address": "fa:16:3e:3f:70:8b", "network": {"id": "c1532d46-30e4-42ec-9ba7-4dc79dd935a5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1705465512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05214746198d48dea7b8b3617f29cb40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a2ec90-a4", "ovs_interfaceid": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:19:02 compute-0 nova_compute[189296]: 2025-11-28 18:19:02.861 189300 DEBUG oslo_concurrency.lockutils [None req-ce144944-406a-4e9e-b7e3-bd44d3b1a49e f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Releasing lock "refresh_cache-1b9021c0-08c4-448d-9f6c-a589a543fb4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:19:02 compute-0 nova_compute[189296]: 2025-11-28 18:19:02.862 189300 DEBUG nova.compute.manager [None req-ce144944-406a-4e9e-b7e3-bd44d3b1a49e f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Nov 28 18:19:02 compute-0 nova_compute[189296]: 2025-11-28 18:19:02.862 189300 DEBUG nova.compute.manager [None req-ce144944-406a-4e9e-b7e3-bd44d3b1a49e f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] network_info to inject: |[{"id": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "address": "fa:16:3e:3f:70:8b", "network": {"id": "c1532d46-30e4-42ec-9ba7-4dc79dd935a5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1705465512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05214746198d48dea7b8b3617f29cb40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a2ec90-a4", "ovs_interfaceid": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Nov 28 18:19:02 compute-0 nova_compute[189296]: 2025-11-28 18:19:02.864 189300 DEBUG oslo_concurrency.lockutils [req-d2b01b66-e551-4d60-9e6d-1ac65e1dd87a req-6244739c-315b-4442-88b8-3e4d2acfb418 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-1b9021c0-08c4-448d-9f6c-a589a543fb4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:19:02 compute-0 nova_compute[189296]: 2025-11-28 18:19:02.864 189300 DEBUG nova.network.neutron [req-d2b01b66-e551-4d60-9e6d-1ac65e1dd87a req-6244739c-315b-4442-88b8-3e4d2acfb418 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Refreshing network info cache for port c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 18:19:03 compute-0 nova_compute[189296]: 2025-11-28 18:19:03.612 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:03 compute-0 ovn_controller[97771]: 2025-11-28T18:19:03Z|00118|binding|INFO|Releasing lport c8eddf3b-1e0b-416b-ad1a-748f52f665f0 from this chassis (sb_readonly=0)
Nov 28 18:19:03 compute-0 ovn_controller[97771]: 2025-11-28T18:19:03Z|00119|binding|INFO|Releasing lport fadccca5-e309-4390-a64b-6711ee103450 from this chassis (sb_readonly=0)
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.016 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.478 189300 DEBUG nova.compute.manager [req-9e8f7e93-d5e0-4f68-93d1-0b9e2f456cc5 req-30be40a4-9c2f-47c3-ba0a-ca4ad251f370 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Received event network-vif-plugged-7a69f46e-77c5-4129-9783-254170a7422b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.478 189300 DEBUG oslo_concurrency.lockutils [req-9e8f7e93-d5e0-4f68-93d1-0b9e2f456cc5 req-30be40a4-9c2f-47c3-ba0a-ca4ad251f370 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "0af9c8e6-8030-462a-9dfd-d52f041685f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.478 189300 DEBUG oslo_concurrency.lockutils [req-9e8f7e93-d5e0-4f68-93d1-0b9e2f456cc5 req-30be40a4-9c2f-47c3-ba0a-ca4ad251f370 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "0af9c8e6-8030-462a-9dfd-d52f041685f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.479 189300 DEBUG oslo_concurrency.lockutils [req-9e8f7e93-d5e0-4f68-93d1-0b9e2f456cc5 req-30be40a4-9c2f-47c3-ba0a-ca4ad251f370 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "0af9c8e6-8030-462a-9dfd-d52f041685f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.479 189300 DEBUG nova.compute.manager [req-9e8f7e93-d5e0-4f68-93d1-0b9e2f456cc5 req-30be40a4-9c2f-47c3-ba0a-ca4ad251f370 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Processing event network-vif-plugged-7a69f46e-77c5-4129-9783-254170a7422b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.479 189300 DEBUG nova.compute.manager [req-9e8f7e93-d5e0-4f68-93d1-0b9e2f456cc5 req-30be40a4-9c2f-47c3-ba0a-ca4ad251f370 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Received event network-vif-plugged-7a69f46e-77c5-4129-9783-254170a7422b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.479 189300 DEBUG oslo_concurrency.lockutils [req-9e8f7e93-d5e0-4f68-93d1-0b9e2f456cc5 req-30be40a4-9c2f-47c3-ba0a-ca4ad251f370 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "0af9c8e6-8030-462a-9dfd-d52f041685f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.479 189300 DEBUG oslo_concurrency.lockutils [req-9e8f7e93-d5e0-4f68-93d1-0b9e2f456cc5 req-30be40a4-9c2f-47c3-ba0a-ca4ad251f370 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "0af9c8e6-8030-462a-9dfd-d52f041685f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.480 189300 DEBUG oslo_concurrency.lockutils [req-9e8f7e93-d5e0-4f68-93d1-0b9e2f456cc5 req-30be40a4-9c2f-47c3-ba0a-ca4ad251f370 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "0af9c8e6-8030-462a-9dfd-d52f041685f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.480 189300 DEBUG nova.compute.manager [req-9e8f7e93-d5e0-4f68-93d1-0b9e2f456cc5 req-30be40a4-9c2f-47c3-ba0a-ca4ad251f370 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] No waiting events found dispatching network-vif-plugged-7a69f46e-77c5-4129-9783-254170a7422b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.480 189300 WARNING nova.compute.manager [req-9e8f7e93-d5e0-4f68-93d1-0b9e2f456cc5 req-30be40a4-9c2f-47c3-ba0a-ca4ad251f370 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Received unexpected event network-vif-plugged-7a69f46e-77c5-4129-9783-254170a7422b for instance with vm_state building and task_state spawning.
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.480 189300 DEBUG nova.compute.manager [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.487 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764353944.4867885, 0af9c8e6-8030-462a-9dfd-d52f041685f5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.487 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] VM Resumed (Lifecycle Event)
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.489 189300 DEBUG nova.virt.libvirt.driver [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.495 189300 INFO nova.virt.libvirt.driver [-] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Instance spawned successfully.
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.495 189300 DEBUG nova.virt.libvirt.driver [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.512 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.518 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.601 189300 DEBUG nova.virt.libvirt.driver [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.602 189300 DEBUG nova.virt.libvirt.driver [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.603 189300 DEBUG nova.virt.libvirt.driver [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.603 189300 DEBUG nova.virt.libvirt.driver [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.604 189300 DEBUG nova.virt.libvirt.driver [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.604 189300 DEBUG nova.virt.libvirt.driver [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.683 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.721 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.723 189300 INFO nova.compute.manager [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Took 10.80 seconds to spawn the instance on the hypervisor.
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.724 189300 DEBUG nova.compute.manager [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.792 189300 INFO nova.compute.manager [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Took 11.92 seconds to build instance.
Nov 28 18:19:04 compute-0 nova_compute[189296]: 2025-11-28 18:19:04.983 189300 DEBUG oslo_concurrency.lockutils [None req-7b22ef93-9d47-4466-aef1-b2fa8121dfe0 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "0af9c8e6-8030-462a-9dfd-d52f041685f5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.294s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:19:05 compute-0 nova_compute[189296]: 2025-11-28 18:19:05.512 189300 DEBUG nova.objects.instance [None req-ae147392-97ad-4064-b8d0-05b0f98ec97e f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Lazy-loading 'flavor' on Instance uuid 1b9021c0-08c4-448d-9f6c-a589a543fb4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 28 18:19:05 compute-0 nova_compute[189296]: 2025-11-28 18:19:05.541 189300 DEBUG oslo_concurrency.lockutils [None req-ae147392-97ad-4064-b8d0-05b0f98ec97e f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Acquiring lock "refresh_cache-1b9021c0-08c4-448d-9f6c-a589a543fb4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 28 18:19:06 compute-0 nova_compute[189296]: 2025-11-28 18:19:06.277 189300 DEBUG nova.network.neutron [req-d2b01b66-e551-4d60-9e6d-1ac65e1dd87a req-6244739c-315b-4442-88b8-3e4d2acfb418 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Updated VIF entry in instance network info cache for port c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 28 18:19:06 compute-0 nova_compute[189296]: 2025-11-28 18:19:06.278 189300 DEBUG nova.network.neutron [req-d2b01b66-e551-4d60-9e6d-1ac65e1dd87a req-6244739c-315b-4442-88b8-3e4d2acfb418 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Updating instance_info_cache with network_info: [{"id": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "address": "fa:16:3e:3f:70:8b", "network": {"id": "c1532d46-30e4-42ec-9ba7-4dc79dd935a5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1705465512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05214746198d48dea7b8b3617f29cb40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a2ec90-a4", "ovs_interfaceid": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 28 18:19:06 compute-0 nova_compute[189296]: 2025-11-28 18:19:06.469 189300 DEBUG oslo_concurrency.lockutils [req-d2b01b66-e551-4d60-9e6d-1ac65e1dd87a req-6244739c-315b-4442-88b8-3e4d2acfb418 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-1b9021c0-08c4-448d-9f6c-a589a543fb4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 28 18:19:06 compute-0 nova_compute[189296]: 2025-11-28 18:19:06.470 189300 DEBUG oslo_concurrency.lockutils [None req-ae147392-97ad-4064-b8d0-05b0f98ec97e f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Acquired lock "refresh_cache-1b9021c0-08c4-448d-9f6c-a589a543fb4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 28 18:19:06 compute-0 nova_compute[189296]: 2025-11-28 18:19:06.519 189300 DEBUG oslo_concurrency.lockutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Acquiring lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:19:06 compute-0 nova_compute[189296]: 2025-11-28 18:19:06.520 189300 DEBUG oslo_concurrency.lockutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:19:06 compute-0 nova_compute[189296]: 2025-11-28 18:19:06.543 189300 DEBUG nova.compute.manager [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 28 18:19:06 compute-0 nova_compute[189296]: 2025-11-28 18:19:06.651 189300 DEBUG oslo_concurrency.lockutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:19:06 compute-0 nova_compute[189296]: 2025-11-28 18:19:06.652 189300 DEBUG oslo_concurrency.lockutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:19:06 compute-0 nova_compute[189296]: 2025-11-28 18:19:06.662 189300 DEBUG nova.virt.hardware [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 28 18:19:06 compute-0 nova_compute[189296]: 2025-11-28 18:19:06.662 189300 INFO nova.compute.claims [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Claim successful on node compute-0.ctlplane.example.com
Nov 28 18:19:06 compute-0 nova_compute[189296]: 2025-11-28 18:19:06.862 189300 DEBUG nova.compute.provider_tree [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 28 18:19:06 compute-0 nova_compute[189296]: 2025-11-28 18:19:06.884 189300 DEBUG nova.scheduler.client.report [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 28 18:19:06 compute-0 nova_compute[189296]: 2025-11-28 18:19:06.915 189300 DEBUG oslo_concurrency.lockutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.263s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:19:06 compute-0 nova_compute[189296]: 2025-11-28 18:19:06.915 189300 DEBUG nova.compute.manager [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 28 18:19:06 compute-0 nova_compute[189296]: 2025-11-28 18:19:06.966 189300 DEBUG nova.compute.manager [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 28 18:19:06 compute-0 nova_compute[189296]: 2025-11-28 18:19:06.966 189300 DEBUG nova.network.neutron [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 28 18:19:06 compute-0 nova_compute[189296]: 2025-11-28 18:19:06.985 189300 INFO nova.virt.libvirt.driver [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.011 189300 DEBUG nova.compute.manager [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.102 189300 DEBUG nova.compute.manager [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.104 189300 DEBUG nova.virt.libvirt.driver [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.105 189300 INFO nova.virt.libvirt.driver [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Creating image(s)
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.106 189300 DEBUG oslo_concurrency.lockutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Acquiring lock "/var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.106 189300 DEBUG oslo_concurrency.lockutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lock "/var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.107 189300 DEBUG oslo_concurrency.lockutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lock "/var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.124 189300 DEBUG oslo_concurrency.processutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.207 189300 DEBUG oslo_concurrency.processutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.209 189300 DEBUG oslo_concurrency.lockutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Acquiring lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.210 189300 DEBUG oslo_concurrency.lockutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.227 189300 DEBUG oslo_concurrency.processutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.299 189300 DEBUG oslo_concurrency.processutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.301 189300 DEBUG oslo_concurrency.processutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c,backing_fmt=raw /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.346 189300 DEBUG oslo_concurrency.processutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c,backing_fmt=raw /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk 1073741824" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.347 189300 DEBUG oslo_concurrency.lockutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.348 189300 DEBUG oslo_concurrency.processutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.404 189300 DEBUG oslo_concurrency.processutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.405 189300 DEBUG nova.virt.disk.api [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Checking if we can resize image /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.406 189300 DEBUG oslo_concurrency.processutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.430 189300 DEBUG nova.policy [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '44a8645b16fc4d99820df9d0c6154195', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6ebd016d88464c67abefec4da518674a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.463 189300 DEBUG oslo_concurrency.processutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.464 189300 DEBUG nova.virt.disk.api [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Cannot resize image /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.465 189300 DEBUG nova.objects.instance [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lazy-loading 'migration_context' on Instance uuid 38dd3ba8-0751-41a0-b83f-b49dc0b192c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.483 189300 DEBUG nova.virt.libvirt.driver [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.484 189300 DEBUG nova.virt.libvirt.driver [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Ensure instance console log exists: /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.484 189300 DEBUG oslo_concurrency.lockutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.485 189300 DEBUG oslo_concurrency.lockutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:19:07 compute-0 nova_compute[189296]: 2025-11-28 18:19:07.486 189300 DEBUG oslo_concurrency.lockutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:19:08 compute-0 nova_compute[189296]: 2025-11-28 18:19:08.087 189300 DEBUG nova.network.neutron [None req-ae147392-97ad-4064-b8d0-05b0f98ec97e f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 28 18:19:08 compute-0 nova_compute[189296]: 2025-11-28 18:19:08.208 189300 DEBUG nova.compute.manager [req-7e37b910-e32b-4089-8f49-21cb3dcac1b3 req-8e7530a2-13f0-4a8e-9899-a913463213af 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Received event network-changed-c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 28 18:19:08 compute-0 nova_compute[189296]: 2025-11-28 18:19:08.209 189300 DEBUG nova.compute.manager [req-7e37b910-e32b-4089-8f49-21cb3dcac1b3 req-8e7530a2-13f0-4a8e-9899-a913463213af 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Refreshing instance network info cache due to event network-changed-c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 28 18:19:08 compute-0 nova_compute[189296]: 2025-11-28 18:19:08.210 189300 DEBUG oslo_concurrency.lockutils [req-7e37b910-e32b-4089-8f49-21cb3dcac1b3 req-8e7530a2-13f0-4a8e-9899-a913463213af 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-1b9021c0-08c4-448d-9f6c-a589a543fb4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 28 18:19:08 compute-0 nova_compute[189296]: 2025-11-28 18:19:08.615 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:19:08 compute-0 nova_compute[189296]: 2025-11-28 18:19:08.720 189300 DEBUG nova.network.neutron [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Successfully created port: 9dd54f15-0412-4387-bc8f-07d1b4702dbb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 28 18:19:09 compute-0 nova_compute[189296]: 2025-11-28 18:19:09.723 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:10 compute-0 nova_compute[189296]: 2025-11-28 18:19:10.493 189300 DEBUG nova.compute.manager [req-31d5b452-d7ea-412f-8890-4cf8ec98c4e4 req-c1359330-06fd-4031-969b-6f135c4488cd 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Received event network-changed-7a69f46e-77c5-4129-9783-254170a7422b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:19:10 compute-0 nova_compute[189296]: 2025-11-28 18:19:10.494 189300 DEBUG nova.compute.manager [req-31d5b452-d7ea-412f-8890-4cf8ec98c4e4 req-c1359330-06fd-4031-969b-6f135c4488cd 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Refreshing instance network info cache due to event network-changed-7a69f46e-77c5-4129-9783-254170a7422b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 28 18:19:10 compute-0 nova_compute[189296]: 2025-11-28 18:19:10.494 189300 DEBUG oslo_concurrency.lockutils [req-31d5b452-d7ea-412f-8890-4cf8ec98c4e4 req-c1359330-06fd-4031-969b-6f135c4488cd 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-0af9c8e6-8030-462a-9dfd-d52f041685f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:19:10 compute-0 nova_compute[189296]: 2025-11-28 18:19:10.495 189300 DEBUG oslo_concurrency.lockutils [req-31d5b452-d7ea-412f-8890-4cf8ec98c4e4 req-c1359330-06fd-4031-969b-6f135c4488cd 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-0af9c8e6-8030-462a-9dfd-d52f041685f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:19:10 compute-0 nova_compute[189296]: 2025-11-28 18:19:10.496 189300 DEBUG nova.network.neutron [req-31d5b452-d7ea-412f-8890-4cf8ec98c4e4 req-c1359330-06fd-4031-969b-6f135c4488cd 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Refreshing network info cache for port 7a69f46e-77c5-4129-9783-254170a7422b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 18:19:10 compute-0 nova_compute[189296]: 2025-11-28 18:19:10.741 189300 DEBUG nova.network.neutron [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Successfully updated port: 9dd54f15-0412-4387-bc8f-07d1b4702dbb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 28 18:19:10 compute-0 nova_compute[189296]: 2025-11-28 18:19:10.762 189300 DEBUG oslo_concurrency.lockutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Acquiring lock "refresh_cache-38dd3ba8-0751-41a0-b83f-b49dc0b192c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:19:10 compute-0 nova_compute[189296]: 2025-11-28 18:19:10.763 189300 DEBUG oslo_concurrency.lockutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Acquired lock "refresh_cache-38dd3ba8-0751-41a0-b83f-b49dc0b192c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:19:10 compute-0 nova_compute[189296]: 2025-11-28 18:19:10.764 189300 DEBUG nova.network.neutron [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 28 18:19:11 compute-0 nova_compute[189296]: 2025-11-28 18:19:11.012 189300 DEBUG nova.compute.manager [req-a18e5ff5-b98b-4206-bac1-2ef8228002ee req-e874a492-9c5b-469d-a4f6-58e61e9ef8d5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Received event network-changed-9dd54f15-0412-4387-bc8f-07d1b4702dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:19:11 compute-0 nova_compute[189296]: 2025-11-28 18:19:11.013 189300 DEBUG nova.compute.manager [req-a18e5ff5-b98b-4206-bac1-2ef8228002ee req-e874a492-9c5b-469d-a4f6-58e61e9ef8d5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Refreshing instance network info cache due to event network-changed-9dd54f15-0412-4387-bc8f-07d1b4702dbb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 28 18:19:11 compute-0 nova_compute[189296]: 2025-11-28 18:19:11.014 189300 DEBUG oslo_concurrency.lockutils [req-a18e5ff5-b98b-4206-bac1-2ef8228002ee req-e874a492-9c5b-469d-a4f6-58e61e9ef8d5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-38dd3ba8-0751-41a0-b83f-b49dc0b192c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:19:11 compute-0 nova_compute[189296]: 2025-11-28 18:19:11.107 189300 DEBUG nova.network.neutron [None req-ae147392-97ad-4064-b8d0-05b0f98ec97e f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Updating instance_info_cache with network_info: [{"id": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "address": "fa:16:3e:3f:70:8b", "network": {"id": "c1532d46-30e4-42ec-9ba7-4dc79dd935a5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1705465512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05214746198d48dea7b8b3617f29cb40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a2ec90-a4", "ovs_interfaceid": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:19:11 compute-0 nova_compute[189296]: 2025-11-28 18:19:11.145 189300 DEBUG oslo_concurrency.lockutils [None req-ae147392-97ad-4064-b8d0-05b0f98ec97e f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Releasing lock "refresh_cache-1b9021c0-08c4-448d-9f6c-a589a543fb4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:19:11 compute-0 nova_compute[189296]: 2025-11-28 18:19:11.145 189300 DEBUG nova.compute.manager [None req-ae147392-97ad-4064-b8d0-05b0f98ec97e f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Nov 28 18:19:11 compute-0 nova_compute[189296]: 2025-11-28 18:19:11.146 189300 DEBUG nova.compute.manager [None req-ae147392-97ad-4064-b8d0-05b0f98ec97e f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] network_info to inject: |[{"id": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "address": "fa:16:3e:3f:70:8b", "network": {"id": "c1532d46-30e4-42ec-9ba7-4dc79dd935a5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1705465512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05214746198d48dea7b8b3617f29cb40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a2ec90-a4", "ovs_interfaceid": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Nov 28 18:19:11 compute-0 nova_compute[189296]: 2025-11-28 18:19:11.149 189300 DEBUG nova.network.neutron [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 28 18:19:11 compute-0 nova_compute[189296]: 2025-11-28 18:19:11.151 189300 DEBUG oslo_concurrency.lockutils [req-7e37b910-e32b-4089-8f49-21cb3dcac1b3 req-8e7530a2-13f0-4a8e-9899-a913463213af 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-1b9021c0-08c4-448d-9f6c-a589a543fb4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:19:11 compute-0 nova_compute[189296]: 2025-11-28 18:19:11.152 189300 DEBUG nova.network.neutron [req-7e37b910-e32b-4089-8f49-21cb3dcac1b3 req-8e7530a2-13f0-4a8e-9899-a913463213af 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Refreshing network info cache for port c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 18:19:11 compute-0 nova_compute[189296]: 2025-11-28 18:19:11.482 189300 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764353936.4807246, b8886654-0bcc-4b6e-a66e-aa6365e827f3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:19:11 compute-0 nova_compute[189296]: 2025-11-28 18:19:11.483 189300 INFO nova.compute.manager [-] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] VM Stopped (Lifecycle Event)#033[00m
Nov 28 18:19:11 compute-0 nova_compute[189296]: 2025-11-28 18:19:11.506 189300 DEBUG nova.compute.manager [None req-ccfd24e1-6898-481a-9e55-3565a96ecd0b - - - - - -] [instance: b8886654-0bcc-4b6e-a66e-aa6365e827f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.475 189300 DEBUG oslo_concurrency.lockutils [None req-e02795e9-9582-4c93-848f-8e5016547ad4 f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Acquiring lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.476 189300 DEBUG oslo_concurrency.lockutils [None req-e02795e9-9582-4c93-848f-8e5016547ad4 f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.477 189300 DEBUG oslo_concurrency.lockutils [None req-e02795e9-9582-4c93-848f-8e5016547ad4 f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Acquiring lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.477 189300 DEBUG oslo_concurrency.lockutils [None req-e02795e9-9582-4c93-848f-8e5016547ad4 f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.478 189300 DEBUG oslo_concurrency.lockutils [None req-e02795e9-9582-4c93-848f-8e5016547ad4 f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.479 189300 INFO nova.compute.manager [None req-e02795e9-9582-4c93-848f-8e5016547ad4 f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Terminating instance#033[00m
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.480 189300 DEBUG nova.compute.manager [None req-e02795e9-9582-4c93-848f-8e5016547ad4 f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 28 18:19:12 compute-0 kernel: tapc1a2ec90-a4 (unregistering): left promiscuous mode
Nov 28 18:19:12 compute-0 NetworkManager[56307]: <info>  [1764353952.5100] device (tapc1a2ec90-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 28 18:19:12 compute-0 ovn_controller[97771]: 2025-11-28T18:19:12Z|00120|binding|INFO|Releasing lport c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 from this chassis (sb_readonly=0)
Nov 28 18:19:12 compute-0 ovn_controller[97771]: 2025-11-28T18:19:12Z|00121|binding|INFO|Setting lport c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 down in Southbound
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.531 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:12 compute-0 ovn_controller[97771]: 2025-11-28T18:19:12Z|00122|binding|INFO|Removing iface tapc1a2ec90-a4 ovn-installed in OVS
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.537 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:12 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:12.543 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:70:8b 10.100.0.4'], port_security=['fa:16:3e:3f:70:8b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1b9021c0-08c4-448d-9f6c-a589a543fb4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1532d46-30e4-42ec-9ba7-4dc79dd935a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '05214746198d48dea7b8b3617f29cb40', 'neutron:revision_number': '6', 'neutron:security_group_ids': '16efcad3-8c29-4cf4-abbd-eaf90a8b40f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.181'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=028cab25-8237-4062-b9d7-d9732783abc5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:19:12 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:12.546 106624 INFO neutron.agent.ovn.metadata.agent [-] Port c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 in datapath c1532d46-30e4-42ec-9ba7-4dc79dd935a5 unbound from our chassis#033[00m
Nov 28 18:19:12 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:12.550 106624 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c1532d46-30e4-42ec-9ba7-4dc79dd935a5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 28 18:19:12 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:12.552 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[154796bd-2ea0-46d5-afa7-1cce48381e05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:12 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:12.553 106624 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c1532d46-30e4-42ec-9ba7-4dc79dd935a5 namespace which is not needed anymore#033[00m
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.570 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:12 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 28 18:19:12 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000009.scope: Consumed 36.381s CPU time.
Nov 28 18:19:12 compute-0 systemd-machined[155703]: Machine qemu-8-instance-00000009 terminated.
Nov 28 18:19:12 compute-0 neutron-haproxy-ovnmeta-c1532d46-30e4-42ec-9ba7-4dc79dd935a5[248440]: [NOTICE]   (248446) : haproxy version is 2.8.14-c23fe91
Nov 28 18:19:12 compute-0 neutron-haproxy-ovnmeta-c1532d46-30e4-42ec-9ba7-4dc79dd935a5[248440]: [NOTICE]   (248446) : path to executable is /usr/sbin/haproxy
Nov 28 18:19:12 compute-0 neutron-haproxy-ovnmeta-c1532d46-30e4-42ec-9ba7-4dc79dd935a5[248440]: [WARNING]  (248446) : Exiting Master process...
Nov 28 18:19:12 compute-0 neutron-haproxy-ovnmeta-c1532d46-30e4-42ec-9ba7-4dc79dd935a5[248440]: [ALERT]    (248446) : Current worker (248449) exited with code 143 (Terminated)
Nov 28 18:19:12 compute-0 neutron-haproxy-ovnmeta-c1532d46-30e4-42ec-9ba7-4dc79dd935a5[248440]: [WARNING]  (248446) : All workers exited. Exiting... (0)
Nov 28 18:19:12 compute-0 systemd[1]: libpod-95fcddfffa8df6b5158e58c3f329c258f1ab0724ad6b5c4b4c2aa729ff72c066.scope: Deactivated successfully.
Nov 28 18:19:12 compute-0 podman[249557]: 2025-11-28 18:19:12.751597924 +0000 UTC m=+0.056090776 container died 95fcddfffa8df6b5158e58c3f329c258f1ab0724ad6b5c4b4c2aa729ff72c066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1532d46-30e4-42ec-9ba7-4dc79dd935a5, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.760 189300 INFO nova.virt.libvirt.driver [-] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Instance destroyed successfully.#033[00m
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.761 189300 DEBUG nova.objects.instance [None req-e02795e9-9582-4c93-848f-8e5016547ad4 f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Lazy-loading 'resources' on Instance uuid 1b9021c0-08c4-448d-9f6c-a589a543fb4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:19:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-95fcddfffa8df6b5158e58c3f329c258f1ab0724ad6b5c4b4c2aa729ff72c066-userdata-shm.mount: Deactivated successfully.
Nov 28 18:19:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-f89de99a7703d0392c1140feaa00e3cd73fc92ce4749cf19375e3a2e5c0d1969-merged.mount: Deactivated successfully.
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.803 189300 DEBUG nova.virt.libvirt.vif [None req-e02795e9-9582-4c93-848f-8e5016547ad4 f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-28T18:17:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-403870488',display_name='tempest-AttachInterfacesUnderV243Test-server-403870488',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-403870488',id=9,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPncI9of+mH+7uV43WSH0h6v0tb4ecdPAqEEgZeWgO3O4t7/yOoQtm5GFO9PNSzxMORfBEH14/GC/3Lk3DyzrmiLz758VzhRyMdlYe9lNVTfz8ynkWxJ/dx+73eKT+nC6g==',key_name='tempest-keypair-20086383',keypairs=<?>,launch_index=0,launched_at=2025-11-28T18:18:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='05214746198d48dea7b8b3617f29cb40',ramdisk_id='',reservation_id='r-7m48njdu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-732631617',owner_user_name='tempest-AttachInterfacesUnderV243Test-732631617-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-28T18:19:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f140e7d00b1542d087d5f92a53ef5082',uuid=1b9021c0-08c4-448d-9f6c-a589a543fb4c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "address": "fa:16:3e:3f:70:8b", "network": {"id": "c1532d46-30e4-42ec-9ba7-4dc79dd935a5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1705465512-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05214746198d48dea7b8b3617f29cb40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a2ec90-a4", "ovs_interfaceid": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.805 189300 DEBUG nova.network.os_vif_util [None req-e02795e9-9582-4c93-848f-8e5016547ad4 f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Converting VIF {"id": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "address": "fa:16:3e:3f:70:8b", "network": {"id": "c1532d46-30e4-42ec-9ba7-4dc79dd935a5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1705465512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05214746198d48dea7b8b3617f29cb40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a2ec90-a4", "ovs_interfaceid": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:19:12 compute-0 podman[249557]: 2025-11-28 18:19:12.805658289 +0000 UTC m=+0.110151141 container cleanup 95fcddfffa8df6b5158e58c3f329c258f1ab0724ad6b5c4b4c2aa729ff72c066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1532d46-30e4-42ec-9ba7-4dc79dd935a5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.806 189300 DEBUG nova.network.os_vif_util [None req-e02795e9-9582-4c93-848f-8e5016547ad4 f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3f:70:8b,bridge_name='br-int',has_traffic_filtering=True,id=c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6,network=Network(c1532d46-30e4-42ec-9ba7-4dc79dd935a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1a2ec90-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.807 189300 DEBUG os_vif [None req-e02795e9-9582-4c93-848f-8e5016547ad4 f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:70:8b,bridge_name='br-int',has_traffic_filtering=True,id=c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6,network=Network(c1532d46-30e4-42ec-9ba7-4dc79dd935a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1a2ec90-a4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.808 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.809 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1a2ec90-a4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.811 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.814 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.815 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.817 189300 INFO os_vif [None req-e02795e9-9582-4c93-848f-8e5016547ad4 f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:70:8b,bridge_name='br-int',has_traffic_filtering=True,id=c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6,network=Network(c1532d46-30e4-42ec-9ba7-4dc79dd935a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1a2ec90-a4')#033[00m
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.818 189300 INFO nova.virt.libvirt.driver [None req-e02795e9-9582-4c93-848f-8e5016547ad4 f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Deleting instance files /var/lib/nova/instances/1b9021c0-08c4-448d-9f6c-a589a543fb4c_del#033[00m
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.819 189300 INFO nova.virt.libvirt.driver [None req-e02795e9-9582-4c93-848f-8e5016547ad4 f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Deletion of /var/lib/nova/instances/1b9021c0-08c4-448d-9f6c-a589a543fb4c_del complete#033[00m
Nov 28 18:19:12 compute-0 systemd[1]: libpod-conmon-95fcddfffa8df6b5158e58c3f329c258f1ab0724ad6b5c4b4c2aa729ff72c066.scope: Deactivated successfully.
Nov 28 18:19:12 compute-0 podman[249602]: 2025-11-28 18:19:12.89787085 +0000 UTC m=+0.049221178 container remove 95fcddfffa8df6b5158e58c3f329c258f1ab0724ad6b5c4b4c2aa729ff72c066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1532d46-30e4-42ec-9ba7-4dc79dd935a5, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 28 18:19:12 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:12.905 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[c761995a-a346-4a0c-bd90-540df2ad3be0]: (4, ('Fri Nov 28 06:19:12 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c1532d46-30e4-42ec-9ba7-4dc79dd935a5 (95fcddfffa8df6b5158e58c3f329c258f1ab0724ad6b5c4b4c2aa729ff72c066)\n95fcddfffa8df6b5158e58c3f329c258f1ab0724ad6b5c4b4c2aa729ff72c066\nFri Nov 28 06:19:12 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c1532d46-30e4-42ec-9ba7-4dc79dd935a5 (95fcddfffa8df6b5158e58c3f329c258f1ab0724ad6b5c4b4c2aa729ff72c066)\n95fcddfffa8df6b5158e58c3f329c258f1ab0724ad6b5c4b4c2aa729ff72c066\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:12 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:12.906 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[35c131bf-c4f5-491a-aa55-e9811da103ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:12 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:12.907 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1532d46-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.909 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:12 compute-0 kernel: tapc1532d46-30: left promiscuous mode
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.911 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:12 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:12.914 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[d1ca322a-23e6-4c2d-b631-5848a1af63b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:12 compute-0 nova_compute[189296]: 2025-11-28 18:19:12.926 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:12 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:12.935 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[57f5b399-35c9-4b04-84b6-aff7e1d55e3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:12 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:12.936 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[e8f22341-3f47-41d9-a78f-c6063d6b05a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:12 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:12.952 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[dfbe5937-0c72-4840-a367-d2fec6ecd900]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502758, 'reachable_time': 35851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249616, 'error': None, 'target': 'ovnmeta-c1532d46-30e4-42ec-9ba7-4dc79dd935a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:12 compute-0 systemd[1]: run-netns-ovnmeta\x2dc1532d46\x2d30e4\x2d42ec\x2d9ba7\x2d4dc79dd935a5.mount: Deactivated successfully.
Nov 28 18:19:12 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:12.956 106734 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c1532d46-30e4-42ec-9ba7-4dc79dd935a5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 28 18:19:12 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:12.956 106734 DEBUG oslo.privsep.daemon [-] privsep: reply[8638a64c-7fae-4c14-9423-a68549f1e93f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.105 189300 INFO nova.compute.manager [None req-e02795e9-9582-4c93-848f-8e5016547ad4 f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Took 0.62 seconds to destroy the instance on the hypervisor.#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.107 189300 DEBUG oslo.service.loopingcall [None req-e02795e9-9582-4c93-848f-8e5016547ad4 f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.108 189300 DEBUG nova.compute.manager [-] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.108 189300 DEBUG nova.network.neutron [-] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.331 189300 DEBUG nova.compute.manager [req-3d5b147e-99fa-45c8-bc6a-81d5f16deb4f req-6ac50fc7-4910-4d8c-b5d0-a37a30298434 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Received event network-vif-unplugged-c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.331 189300 DEBUG oslo_concurrency.lockutils [req-3d5b147e-99fa-45c8-bc6a-81d5f16deb4f req-6ac50fc7-4910-4d8c-b5d0-a37a30298434 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.332 189300 DEBUG oslo_concurrency.lockutils [req-3d5b147e-99fa-45c8-bc6a-81d5f16deb4f req-6ac50fc7-4910-4d8c-b5d0-a37a30298434 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.332 189300 DEBUG oslo_concurrency.lockutils [req-3d5b147e-99fa-45c8-bc6a-81d5f16deb4f req-6ac50fc7-4910-4d8c-b5d0-a37a30298434 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.332 189300 DEBUG nova.compute.manager [req-3d5b147e-99fa-45c8-bc6a-81d5f16deb4f req-6ac50fc7-4910-4d8c-b5d0-a37a30298434 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] No waiting events found dispatching network-vif-unplugged-c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.333 189300 DEBUG nova.compute.manager [req-3d5b147e-99fa-45c8-bc6a-81d5f16deb4f req-6ac50fc7-4910-4d8c-b5d0-a37a30298434 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Received event network-vif-unplugged-c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.334 189300 DEBUG nova.network.neutron [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Updating instance_info_cache with network_info: [{"id": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "address": "fa:16:3e:ad:e5:da", "network": {"id": "cecb017f-4e6e-4722-8798-5d73232e6fbd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1305466028-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ebd016d88464c67abefec4da518674a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dd54f15-04", "ovs_interfaceid": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.358 189300 DEBUG oslo_concurrency.lockutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Releasing lock "refresh_cache-38dd3ba8-0751-41a0-b83f-b49dc0b192c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.359 189300 DEBUG nova.compute.manager [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Instance network_info: |[{"id": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "address": "fa:16:3e:ad:e5:da", "network": {"id": "cecb017f-4e6e-4722-8798-5d73232e6fbd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1305466028-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ebd016d88464c67abefec4da518674a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dd54f15-04", "ovs_interfaceid": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.364 189300 DEBUG oslo_concurrency.lockutils [req-a18e5ff5-b98b-4206-bac1-2ef8228002ee req-e874a492-9c5b-469d-a4f6-58e61e9ef8d5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-38dd3ba8-0751-41a0-b83f-b49dc0b192c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.365 189300 DEBUG nova.network.neutron [req-a18e5ff5-b98b-4206-bac1-2ef8228002ee req-e874a492-9c5b-469d-a4f6-58e61e9ef8d5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Refreshing network info cache for port 9dd54f15-0412-4387-bc8f-07d1b4702dbb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.369 189300 DEBUG nova.virt.libvirt.driver [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Start _get_guest_xml network_info=[{"id": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "address": "fa:16:3e:ad:e5:da", "network": {"id": "cecb017f-4e6e-4722-8798-5d73232e6fbd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1305466028-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ebd016d88464c67abefec4da518674a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dd54f15-04", "ovs_interfaceid": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-28T18:16:38Z,direct_url=<?>,disk_format='qcow2',id=ffec9e61-65fb-46ae-8d34-338639229ec3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-28T18:16:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'image_id': 'ffec9e61-65fb-46ae-8d34-338639229ec3'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.379 189300 WARNING nova.virt.libvirt.driver [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.386 189300 DEBUG nova.virt.libvirt.host [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.387 189300 DEBUG nova.virt.libvirt.host [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.391 189300 DEBUG nova.virt.libvirt.host [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.392 189300 DEBUG nova.virt.libvirt.host [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.392 189300 DEBUG nova.virt.libvirt.driver [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.393 189300 DEBUG nova.virt.hardware [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-28T18:16:37Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b177f611-8f79-4bfd-9a12-e83e9545757b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-28T18:16:38Z,direct_url=<?>,disk_format='qcow2',id=ffec9e61-65fb-46ae-8d34-338639229ec3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-28T18:16:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.394 189300 DEBUG nova.virt.hardware [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.394 189300 DEBUG nova.virt.hardware [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.394 189300 DEBUG nova.virt.hardware [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.395 189300 DEBUG nova.virt.hardware [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.395 189300 DEBUG nova.virt.hardware [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.396 189300 DEBUG nova.virt.hardware [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.396 189300 DEBUG nova.virt.hardware [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.396 189300 DEBUG nova.virt.hardware [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.397 189300 DEBUG nova.virt.hardware [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.397 189300 DEBUG nova.virt.hardware [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.400 189300 DEBUG nova.virt.libvirt.vif [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T18:19:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-120148377',display_name='tempest-ServerActionsTestJSON-server-120148377',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-120148377',id=12,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDNKDhkiMtsztQmvM2gRYqVRTHcsj/9P9Cg/+MCIxNFg5QbGBxNz8mS/LylMSt0qq29jzqRfKycq5Qi4LzakhV4vYbtYARzjXolBVflKv2a5LVTztOBqSNR1wZxrvf10hw==',key_name='tempest-keypair-957693611',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ebd016d88464c67abefec4da518674a',ramdisk_id='',reservation_id='r-jl0w8ww4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1827601863',owner_user_name='tempest-ServerActionsTestJSON-1827601863-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:19:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='44a8645b16fc4d99820df9d0c6154195',uuid=38dd3ba8-0751-41a0-b83f-b49dc0b192c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "address": "fa:16:3e:ad:e5:da", "network": {"id": "cecb017f-4e6e-4722-8798-5d73232e6fbd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1305466028-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ebd016d88464c67abefec4da518674a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dd54f15-04", "ovs_interfaceid": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.401 189300 DEBUG nova.network.os_vif_util [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Converting VIF {"id": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "address": "fa:16:3e:ad:e5:da", "network": {"id": "cecb017f-4e6e-4722-8798-5d73232e6fbd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1305466028-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ebd016d88464c67abefec4da518674a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dd54f15-04", "ovs_interfaceid": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.401 189300 DEBUG nova.network.os_vif_util [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:e5:da,bridge_name='br-int',has_traffic_filtering=True,id=9dd54f15-0412-4387-bc8f-07d1b4702dbb,network=Network(cecb017f-4e6e-4722-8798-5d73232e6fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dd54f15-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.402 189300 DEBUG nova.objects.instance [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lazy-loading 'pci_devices' on Instance uuid 38dd3ba8-0751-41a0-b83f-b49dc0b192c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.416 189300 DEBUG nova.virt.libvirt.driver [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] End _get_guest_xml xml=<domain type="kvm">
Nov 28 18:19:13 compute-0 nova_compute[189296]:  <uuid>38dd3ba8-0751-41a0-b83f-b49dc0b192c6</uuid>
Nov 28 18:19:13 compute-0 nova_compute[189296]:  <name>instance-0000000c</name>
Nov 28 18:19:13 compute-0 nova_compute[189296]:  <memory>131072</memory>
Nov 28 18:19:13 compute-0 nova_compute[189296]:  <vcpu>1</vcpu>
Nov 28 18:19:13 compute-0 nova_compute[189296]:  <metadata>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <nova:name>tempest-ServerActionsTestJSON-server-120148377</nova:name>
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <nova:creationTime>2025-11-28 18:19:13</nova:creationTime>
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <nova:flavor name="m1.nano">
Nov 28 18:19:13 compute-0 nova_compute[189296]:        <nova:memory>128</nova:memory>
Nov 28 18:19:13 compute-0 nova_compute[189296]:        <nova:disk>1</nova:disk>
Nov 28 18:19:13 compute-0 nova_compute[189296]:        <nova:swap>0</nova:swap>
Nov 28 18:19:13 compute-0 nova_compute[189296]:        <nova:ephemeral>0</nova:ephemeral>
Nov 28 18:19:13 compute-0 nova_compute[189296]:        <nova:vcpus>1</nova:vcpus>
Nov 28 18:19:13 compute-0 nova_compute[189296]:      </nova:flavor>
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <nova:owner>
Nov 28 18:19:13 compute-0 nova_compute[189296]:        <nova:user uuid="44a8645b16fc4d99820df9d0c6154195">tempest-ServerActionsTestJSON-1827601863-project-member</nova:user>
Nov 28 18:19:13 compute-0 nova_compute[189296]:        <nova:project uuid="6ebd016d88464c67abefec4da518674a">tempest-ServerActionsTestJSON-1827601863</nova:project>
Nov 28 18:19:13 compute-0 nova_compute[189296]:      </nova:owner>
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <nova:root type="image" uuid="ffec9e61-65fb-46ae-8d34-338639229ec3"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <nova:ports>
Nov 28 18:19:13 compute-0 nova_compute[189296]:        <nova:port uuid="9dd54f15-0412-4387-bc8f-07d1b4702dbb">
Nov 28 18:19:13 compute-0 nova_compute[189296]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:        </nova:port>
Nov 28 18:19:13 compute-0 nova_compute[189296]:      </nova:ports>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    </nova:instance>
Nov 28 18:19:13 compute-0 nova_compute[189296]:  </metadata>
Nov 28 18:19:13 compute-0 nova_compute[189296]:  <sysinfo type="smbios">
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <system>
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <entry name="manufacturer">RDO</entry>
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <entry name="product">OpenStack Compute</entry>
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <entry name="serial">38dd3ba8-0751-41a0-b83f-b49dc0b192c6</entry>
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <entry name="uuid">38dd3ba8-0751-41a0-b83f-b49dc0b192c6</entry>
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <entry name="family">Virtual Machine</entry>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    </system>
Nov 28 18:19:13 compute-0 nova_compute[189296]:  </sysinfo>
Nov 28 18:19:13 compute-0 nova_compute[189296]:  <os>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <boot dev="hd"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <smbios mode="sysinfo"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:  </os>
Nov 28 18:19:13 compute-0 nova_compute[189296]:  <features>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <acpi/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <apic/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <vmcoreinfo/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:  </features>
Nov 28 18:19:13 compute-0 nova_compute[189296]:  <clock offset="utc">
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <timer name="pit" tickpolicy="delay"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <timer name="hpet" present="no"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:  </clock>
Nov 28 18:19:13 compute-0 nova_compute[189296]:  <cpu mode="host-model" match="exact">
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <topology sockets="1" cores="1" threads="1"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:  </cpu>
Nov 28 18:19:13 compute-0 nova_compute[189296]:  <devices>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <disk type="file" device="disk">
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <target dev="vda" bus="virtio"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <disk type="file" device="cdrom">
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <driver name="qemu" type="raw" cache="none"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.config"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <target dev="sda" bus="sata"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <interface type="ethernet">
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <mac address="fa:16:3e:ad:e5:da"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <driver name="vhost" rx_queue_size="512"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <mtu size="1442"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <target dev="tap9dd54f15-04"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    </interface>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <serial type="pty">
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <log file="/var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/console.log" append="off"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    </serial>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <video>
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    </video>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <input type="tablet" bus="usb"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <rng model="virtio">
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <backend model="random">/dev/urandom</backend>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    </rng>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <controller type="usb" index="0"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    <memballoon model="virtio">
Nov 28 18:19:13 compute-0 nova_compute[189296]:      <stats period="10"/>
Nov 28 18:19:13 compute-0 nova_compute[189296]:    </memballoon>
Nov 28 18:19:13 compute-0 nova_compute[189296]:  </devices>
Nov 28 18:19:13 compute-0 nova_compute[189296]: </domain>
Nov 28 18:19:13 compute-0 nova_compute[189296]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.417 189300 DEBUG nova.compute.manager [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Preparing to wait for external event network-vif-plugged-9dd54f15-0412-4387-bc8f-07d1b4702dbb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.417 189300 DEBUG oslo_concurrency.lockutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Acquiring lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.418 189300 DEBUG oslo_concurrency.lockutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.418 189300 DEBUG oslo_concurrency.lockutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.419 189300 DEBUG nova.virt.libvirt.vif [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T18:19:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-120148377',display_name='tempest-ServerActionsTestJSON-server-120148377',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-120148377',id=12,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDNKDhkiMtsztQmvM2gRYqVRTHcsj/9P9Cg/+MCIxNFg5QbGBxNz8mS/LylMSt0qq29jzqRfKycq5Qi4LzakhV4vYbtYARzjXolBVflKv2a5LVTztOBqSNR1wZxrvf10hw==',key_name='tempest-keypair-957693611',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ebd016d88464c67abefec4da518674a',ramdisk_id='',reservation_id='r-jl0w8ww4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1827601863',owner_user_name='tempest-ServerActionsTestJSON-1827601863-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:19:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='44a8645b16fc4d99820df9d0c6154195',uuid=38dd3ba8-0751-41a0-b83f-b49dc0b192c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "address": "fa:16:3e:ad:e5:da", "network": {"id": "cecb017f-4e6e-4722-8798-5d73232e6fbd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1305466028-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ebd016d88464c67abefec4da518674a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dd54f15-04", "ovs_interfaceid": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.419 189300 DEBUG nova.network.os_vif_util [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Converting VIF {"id": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "address": "fa:16:3e:ad:e5:da", "network": {"id": "cecb017f-4e6e-4722-8798-5d73232e6fbd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1305466028-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ebd016d88464c67abefec4da518674a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dd54f15-04", "ovs_interfaceid": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.420 189300 DEBUG nova.network.os_vif_util [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:e5:da,bridge_name='br-int',has_traffic_filtering=True,id=9dd54f15-0412-4387-bc8f-07d1b4702dbb,network=Network(cecb017f-4e6e-4722-8798-5d73232e6fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dd54f15-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.420 189300 DEBUG os_vif [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:e5:da,bridge_name='br-int',has_traffic_filtering=True,id=9dd54f15-0412-4387-bc8f-07d1b4702dbb,network=Network(cecb017f-4e6e-4722-8798-5d73232e6fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dd54f15-04') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.421 189300 DEBUG nova.network.neutron [req-31d5b452-d7ea-412f-8890-4cf8ec98c4e4 req-c1359330-06fd-4031-969b-6f135c4488cd 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Updated VIF entry in instance network info cache for port 7a69f46e-77c5-4129-9783-254170a7422b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.422 189300 DEBUG nova.network.neutron [req-31d5b452-d7ea-412f-8890-4cf8ec98c4e4 req-c1359330-06fd-4031-969b-6f135c4488cd 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Updating instance_info_cache with network_info: [{"id": "7a69f46e-77c5-4129-9783-254170a7422b", "address": "fa:16:3e:45:0d:59", "network": {"id": "16e2cef3-e4a2-4570-962f-fcbf9f3d2577", "bridge": "br-int", "label": "tempest-network-smoke--630554822", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c41bbf2b30ca428fbd489c3dc29e8045", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a69f46e-77", "ovs_interfaceid": "7a69f46e-77c5-4129-9783-254170a7422b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.423 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.423 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.423 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.427 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.428 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9dd54f15-04, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.428 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9dd54f15-04, col_values=(('external_ids', {'iface-id': '9dd54f15-0412-4387-bc8f-07d1b4702dbb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:e5:da', 'vm-uuid': '38dd3ba8-0751-41a0-b83f-b49dc0b192c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.430 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.431 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 18:19:13 compute-0 NetworkManager[56307]: <info>  [1764353953.4346] manager: (tap9dd54f15-04): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.439 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.440 189300 INFO os_vif [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:e5:da,bridge_name='br-int',has_traffic_filtering=True,id=9dd54f15-0412-4387-bc8f-07d1b4702dbb,network=Network(cecb017f-4e6e-4722-8798-5d73232e6fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dd54f15-04')#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.452 189300 DEBUG oslo_concurrency.lockutils [req-31d5b452-d7ea-412f-8890-4cf8ec98c4e4 req-c1359330-06fd-4031-969b-6f135c4488cd 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-0af9c8e6-8030-462a-9dfd-d52f041685f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.501 189300 DEBUG nova.virt.libvirt.driver [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.502 189300 DEBUG nova.virt.libvirt.driver [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.502 189300 DEBUG nova.virt.libvirt.driver [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] No VIF found with MAC fa:16:3e:ad:e5:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.502 189300 INFO nova.virt.libvirt.driver [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Using config drive#033[00m
Nov 28 18:19:13 compute-0 podman[249619]: 2025-11-28 18:19:13.569937036 +0000 UTC m=+0.095573485 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, name=ubi9-minimal, io.openshift.expose-services=, architecture=x86_64, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, vcs-type=git, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 28 18:19:13 compute-0 podman[249620]: 2025-11-28 18:19:13.583788106 +0000 UTC m=+0.105707683 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:19:13 compute-0 podman[249621]: 2025-11-28 18:19:13.59251746 +0000 UTC m=+0.105191120 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3)
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.617 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:13 compute-0 nova_compute[189296]: 2025-11-28 18:19:13.646 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:19:14 compute-0 nova_compute[189296]: 2025-11-28 18:19:14.138 189300 DEBUG nova.network.neutron [req-7e37b910-e32b-4089-8f49-21cb3dcac1b3 req-8e7530a2-13f0-4a8e-9899-a913463213af 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Updated VIF entry in instance network info cache for port c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 28 18:19:14 compute-0 nova_compute[189296]: 2025-11-28 18:19:14.139 189300 DEBUG nova.network.neutron [req-7e37b910-e32b-4089-8f49-21cb3dcac1b3 req-8e7530a2-13f0-4a8e-9899-a913463213af 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Updating instance_info_cache with network_info: [{"id": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "address": "fa:16:3e:3f:70:8b", "network": {"id": "c1532d46-30e4-42ec-9ba7-4dc79dd935a5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1705465512-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05214746198d48dea7b8b3617f29cb40", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a2ec90-a4", "ovs_interfaceid": "c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:19:14 compute-0 nova_compute[189296]: 2025-11-28 18:19:14.166 189300 DEBUG oslo_concurrency.lockutils [req-7e37b910-e32b-4089-8f49-21cb3dcac1b3 req-8e7530a2-13f0-4a8e-9899-a913463213af 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-1b9021c0-08c4-448d-9f6c-a589a543fb4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:19:14 compute-0 nova_compute[189296]: 2025-11-28 18:19:14.547 189300 INFO nova.virt.libvirt.driver [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Creating config drive at /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.config#033[00m
Nov 28 18:19:14 compute-0 nova_compute[189296]: 2025-11-28 18:19:14.552 189300 DEBUG oslo_concurrency.processutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp64j2zzh2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:19:14 compute-0 nova_compute[189296]: 2025-11-28 18:19:14.678 189300 DEBUG oslo_concurrency.processutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp64j2zzh2" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:19:14 compute-0 kernel: tap9dd54f15-04: entered promiscuous mode
Nov 28 18:19:14 compute-0 systemd-udevd[249537]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 18:19:14 compute-0 nova_compute[189296]: 2025-11-28 18:19:14.742 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:14 compute-0 NetworkManager[56307]: <info>  [1764353954.7435] manager: (tap9dd54f15-04): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Nov 28 18:19:14 compute-0 ovn_controller[97771]: 2025-11-28T18:19:14Z|00123|binding|INFO|Claiming lport 9dd54f15-0412-4387-bc8f-07d1b4702dbb for this chassis.
Nov 28 18:19:14 compute-0 ovn_controller[97771]: 2025-11-28T18:19:14Z|00124|binding|INFO|9dd54f15-0412-4387-bc8f-07d1b4702dbb: Claiming fa:16:3e:ad:e5:da 10.100.0.8
Nov 28 18:19:14 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:14.758 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:e5:da 10.100.0.8'], port_security=['fa:16:3e:ad:e5:da 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '38dd3ba8-0751-41a0-b83f-b49dc0b192c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cecb017f-4e6e-4722-8798-5d73232e6fbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ebd016d88464c67abefec4da518674a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '54c85ea7-0279-4254-b89c-237ccce3cf9e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e84ddcd7-545a-4e48-a6ce-b80b286b2303, chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=9dd54f15-0412-4387-bc8f-07d1b4702dbb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:19:14 compute-0 nova_compute[189296]: 2025-11-28 18:19:14.759 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:14 compute-0 NetworkManager[56307]: <info>  [1764353954.7608] device (tap9dd54f15-04): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 28 18:19:14 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:14.760 106624 INFO neutron.agent.ovn.metadata.agent [-] Port 9dd54f15-0412-4387-bc8f-07d1b4702dbb in datapath cecb017f-4e6e-4722-8798-5d73232e6fbd bound to our chassis#033[00m
Nov 28 18:19:14 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:14.762 106624 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cecb017f-4e6e-4722-8798-5d73232e6fbd#033[00m
Nov 28 18:19:14 compute-0 nova_compute[189296]: 2025-11-28 18:19:14.763 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:14 compute-0 ovn_controller[97771]: 2025-11-28T18:19:14Z|00125|binding|INFO|Setting lport 9dd54f15-0412-4387-bc8f-07d1b4702dbb ovn-installed in OVS
Nov 28 18:19:14 compute-0 ovn_controller[97771]: 2025-11-28T18:19:14Z|00126|binding|INFO|Setting lport 9dd54f15-0412-4387-bc8f-07d1b4702dbb up in Southbound
Nov 28 18:19:14 compute-0 nova_compute[189296]: 2025-11-28 18:19:14.766 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:14 compute-0 NetworkManager[56307]: <info>  [1764353954.7690] device (tap9dd54f15-04): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 28 18:19:14 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:14.771 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[1c1e93f0-78f8-4fdb-8aab-6fa486a92997]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:14 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:14.772 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcecb017f-41 in ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 28 18:19:14 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:14.774 238909 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcecb017f-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 28 18:19:14 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:14.774 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[67f969c2-4927-4cc0-8645-6518e6e4d253]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:14 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:14.776 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[7844a549-3901-4276-bfa2-5fd4bd205f4f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:14 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:14.790 106734 DEBUG oslo.privsep.daemon [-] privsep: reply[aee00e66-c2e2-4538-8213-2cb188f16b2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:14 compute-0 systemd-machined[155703]: New machine qemu-12-instance-0000000c.
Nov 28 18:19:14 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Nov 28 18:19:14 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:14.816 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6d52f1-004c-49aa-b2de-99814f35e327]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:14 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:14.859 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[376d532c-cca3-492f-a280-6a646af91a3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:14 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:14.866 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[06a7bdb8-7fa6-422e-8119-44a4f847c5b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:14 compute-0 NetworkManager[56307]: <info>  [1764353954.8760] manager: (tapcecb017f-40): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Nov 28 18:19:14 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:14.899 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[64957d72-001d-4600-b9bc-51d9fd7e6721]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:14 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:14.903 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[75198a37-6470-4953-917c-9c0a887bb188]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:14 compute-0 NetworkManager[56307]: <info>  [1764353954.9250] device (tapcecb017f-40): carrier: link connected
Nov 28 18:19:14 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:14.929 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[b4578274-cf96-4b69-9e80-ea6d35cdb7eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:14 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:14.948 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[0156384a-5079-433a-9905-61ceba31820a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcecb017f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:ab:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510051, 'reachable_time': 24798, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249721, 'error': None, 'target': 'ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:14 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:14.962 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[73ddb9c7-ba9d-47e3-a2a4-868e6bb7c498]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe35:ab55'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 510051, 'tstamp': 510051}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249722, 'error': None, 'target': 'ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:14 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:14.980 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[70f7e51a-fbb4-4ab8-8767-7ebe6c7a273b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcecb017f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:ab:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510051, 'reachable_time': 24798, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249723, 'error': None, 'target': 'ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:15.008 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[78bd417d-ca50-45d9-8028-667d0e302ef4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.054 189300 DEBUG nova.compute.manager [req-bdcdff08-5ce4-474c-9845-444287573da7 req-a4c72b89-de85-4841-9eac-4d5d511a8a97 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Received event network-vif-plugged-9dd54f15-0412-4387-bc8f-07d1b4702dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.054 189300 DEBUG oslo_concurrency.lockutils [req-bdcdff08-5ce4-474c-9845-444287573da7 req-a4c72b89-de85-4841-9eac-4d5d511a8a97 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.054 189300 DEBUG oslo_concurrency.lockutils [req-bdcdff08-5ce4-474c-9845-444287573da7 req-a4c72b89-de85-4841-9eac-4d5d511a8a97 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.054 189300 DEBUG oslo_concurrency.lockutils [req-bdcdff08-5ce4-474c-9845-444287573da7 req-a4c72b89-de85-4841-9eac-4d5d511a8a97 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.054 189300 DEBUG nova.compute.manager [req-bdcdff08-5ce4-474c-9845-444287573da7 req-a4c72b89-de85-4841-9eac-4d5d511a8a97 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Processing event network-vif-plugged-9dd54f15-0412-4387-bc8f-07d1b4702dbb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:15.075 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[903c7ec7-c173-48ff-b89c-7a3402f337e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.076 189300 DEBUG nova.network.neutron [-] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:15.077 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcecb017f-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:15.078 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:15.078 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcecb017f-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:19:15 compute-0 kernel: tapcecb017f-40: entered promiscuous mode
Nov 28 18:19:15 compute-0 NetworkManager[56307]: <info>  [1764353955.0820] manager: (tapcecb017f-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.081 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.085 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:15.086 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcecb017f-40, col_values=(('external_ids', {'iface-id': '9f681880-a374-4938-a7d7-30fad6716ed2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:19:15 compute-0 ovn_controller[97771]: 2025-11-28T18:19:15Z|00127|binding|INFO|Releasing lport 9f681880-a374-4938-a7d7-30fad6716ed2 from this chassis (sb_readonly=0)
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.088 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.089 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:15.089 106624 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cecb017f-4e6e-4722-8798-5d73232e6fbd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cecb017f-4e6e-4722-8798-5d73232e6fbd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:15.090 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[24b226fa-d98a-4798-8c20-379261277697]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:15.091 106624 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]: global
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]:    log         /dev/log local0 debug
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]:    log-tag     haproxy-metadata-proxy-cecb017f-4e6e-4722-8798-5d73232e6fbd
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]:    user        root
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]:    group       root
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]:    maxconn     1024
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]:    pidfile     /var/lib/neutron/external/pids/cecb017f-4e6e-4722-8798-5d73232e6fbd.pid.haproxy
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]:    daemon
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]: defaults
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]:    log global
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]:    mode http
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]:    option httplog
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]:    option dontlognull
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]:    option http-server-close
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]:    option forwardfor
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]:    retries                 3
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]:    timeout http-request    30s
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]:    timeout connect         30s
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]:    timeout client          32s
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]:    timeout server          32s
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]:    timeout http-keep-alive 30s
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]: listen listener
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]:    bind 169.254.169.254:80
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]:    server metadata /var/lib/neutron/metadata_proxy
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]:    http-request add-header X-OVN-Network-ID cecb017f-4e6e-4722-8798-5d73232e6fbd
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 28 18:19:15 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:15.094 106624 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd', 'env', 'PROCESS_TAG=haproxy-cecb017f-4e6e-4722-8798-5d73232e6fbd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cecb017f-4e6e-4722-8798-5d73232e6fbd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.105 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.106 189300 INFO nova.compute.manager [-] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Took 2.00 seconds to deallocate network for instance.#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.164 189300 DEBUG oslo_concurrency.lockutils [None req-e02795e9-9582-4c93-848f-8e5016547ad4 f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.165 189300 DEBUG oslo_concurrency.lockutils [None req-e02795e9-9582-4c93-848f-8e5016547ad4 f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.182 189300 DEBUG nova.compute.manager [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.183 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764353955.1820424, 38dd3ba8-0751-41a0-b83f-b49dc0b192c6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.184 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] VM Started (Lifecycle Event)#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.188 189300 DEBUG nova.virt.libvirt.driver [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.194 189300 INFO nova.virt.libvirt.driver [-] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Instance spawned successfully.#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.194 189300 DEBUG nova.virt.libvirt.driver [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.205 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.210 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.224 189300 DEBUG nova.virt.libvirt.driver [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.224 189300 DEBUG nova.virt.libvirt.driver [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.225 189300 DEBUG nova.virt.libvirt.driver [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.225 189300 DEBUG nova.virt.libvirt.driver [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.226 189300 DEBUG nova.virt.libvirt.driver [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.226 189300 DEBUG nova.virt.libvirt.driver [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.231 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.231 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764353955.1821945, 38dd3ba8-0751-41a0-b83f-b49dc0b192c6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.232 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] VM Paused (Lifecycle Event)#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.336 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.352 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764353955.1867993, 38dd3ba8-0751-41a0-b83f-b49dc0b192c6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.353 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] VM Resumed (Lifecycle Event)#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.393 189300 DEBUG nova.compute.provider_tree [None req-e02795e9-9582-4c93-848f-8e5016547ad4 f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:19:15 compute-0 podman[249762]: 2025-11-28 18:19:15.482176175 +0000 UTC m=+0.053231796 container create 740fa0af16268967b0e366ba1fca6ea2a8dd0d8e7eb4d63f04e18299969ded54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.499 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.500 189300 DEBUG nova.scheduler.client.report [None req-e02795e9-9582-4c93-848f-8e5016547ad4 f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.505 189300 INFO nova.compute.manager [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Took 8.40 seconds to spawn the instance on the hypervisor.#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.505 189300 DEBUG nova.compute.manager [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.512 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 18:19:15 compute-0 systemd[1]: Started libpod-conmon-740fa0af16268967b0e366ba1fca6ea2a8dd0d8e7eb4d63f04e18299969ded54.scope.
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.539 189300 DEBUG oslo_concurrency.lockutils [None req-e02795e9-9582-4c93-848f-8e5016547ad4 f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.375s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.544 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 28 18:19:15 compute-0 podman[249762]: 2025-11-28 18:19:15.455359908 +0000 UTC m=+0.026415579 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 28 18:19:15 compute-0 systemd[1]: Started libcrun container.
Nov 28 18:19:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00d03a79495d12e13cf428fdecc084acaa7ed8858ecfe91ac8e4fcd68668ad6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 28 18:19:15 compute-0 podman[249762]: 2025-11-28 18:19:15.577951294 +0000 UTC m=+0.149006955 container init 740fa0af16268967b0e366ba1fca6ea2a8dd0d8e7eb4d63f04e18299969ded54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 28 18:19:15 compute-0 podman[249762]: 2025-11-28 18:19:15.584973016 +0000 UTC m=+0.156028647 container start 740fa0af16268967b0e366ba1fca6ea2a8dd0d8e7eb4d63f04e18299969ded54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.587 189300 INFO nova.scheduler.client.report [None req-e02795e9-9582-4c93-848f-8e5016547ad4 f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Deleted allocations for instance 1b9021c0-08c4-448d-9f6c-a589a543fb4c#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.602 189300 INFO nova.compute.manager [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Took 8.99 seconds to build instance.#033[00m
Nov 28 18:19:15 compute-0 neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd[249775]: [NOTICE]   (249779) : New worker (249781) forked
Nov 28 18:19:15 compute-0 neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd[249775]: [NOTICE]   (249779) : Loading success.
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.622 189300 DEBUG oslo_concurrency.lockutils [None req-67473ed8-092f-4ebf-851c-5dd71b9f7ae7 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.672 189300 DEBUG oslo_concurrency.lockutils [None req-e02795e9-9582-4c93-848f-8e5016547ad4 f140e7d00b1542d087d5f92a53ef5082 05214746198d48dea7b8b3617f29cb40 - - default default] Lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.196s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.713 189300 DEBUG nova.compute.manager [req-0750e116-4d2d-45f2-a357-89a4e1fd10c3 req-9050f6d2-aa48-4c74-abaf-7df9f4f0b0d7 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Received event network-vif-plugged-c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.713 189300 DEBUG oslo_concurrency.lockutils [req-0750e116-4d2d-45f2-a357-89a4e1fd10c3 req-9050f6d2-aa48-4c74-abaf-7df9f4f0b0d7 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.714 189300 DEBUG oslo_concurrency.lockutils [req-0750e116-4d2d-45f2-a357-89a4e1fd10c3 req-9050f6d2-aa48-4c74-abaf-7df9f4f0b0d7 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.714 189300 DEBUG oslo_concurrency.lockutils [req-0750e116-4d2d-45f2-a357-89a4e1fd10c3 req-9050f6d2-aa48-4c74-abaf-7df9f4f0b0d7 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "1b9021c0-08c4-448d-9f6c-a589a543fb4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.714 189300 DEBUG nova.compute.manager [req-0750e116-4d2d-45f2-a357-89a4e1fd10c3 req-9050f6d2-aa48-4c74-abaf-7df9f4f0b0d7 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] No waiting events found dispatching network-vif-plugged-c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.715 189300 WARNING nova.compute.manager [req-0750e116-4d2d-45f2-a357-89a4e1fd10c3 req-9050f6d2-aa48-4c74-abaf-7df9f4f0b0d7 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Received unexpected event network-vif-plugged-c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 for instance with vm_state deleted and task_state None.#033[00m
Nov 28 18:19:15 compute-0 nova_compute[189296]: 2025-11-28 18:19:15.715 189300 DEBUG nova.compute.manager [req-0750e116-4d2d-45f2-a357-89a4e1fd10c3 req-9050f6d2-aa48-4c74-abaf-7df9f4f0b0d7 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Received event network-vif-deleted-c1a2ec90-a4ff-4504-8c5f-8fdaf2caf6f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:19:16 compute-0 nova_compute[189296]: 2025-11-28 18:19:16.079 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-1b9021c0-08c4-448d-9f6c-a589a543fb4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:19:16 compute-0 nova_compute[189296]: 2025-11-28 18:19:16.080 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-1b9021c0-08c4-448d-9f6c-a589a543fb4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:19:16 compute-0 nova_compute[189296]: 2025-11-28 18:19:16.081 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:19:16 compute-0 nova_compute[189296]: 2025-11-28 18:19:16.098 189300 DEBUG nova.compute.utils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Can not refresh info_cache because instance was not found refresh_info_cache_for_instance /usr/lib/python3.9/site-packages/nova/compute/utils.py:1010#033[00m
Nov 28 18:19:16 compute-0 nova_compute[189296]: 2025-11-28 18:19:16.396 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 28 18:19:17 compute-0 nova_compute[189296]: 2025-11-28 18:19:17.244 189300 DEBUG nova.network.neutron [req-a18e5ff5-b98b-4206-bac1-2ef8228002ee req-e874a492-9c5b-469d-a4f6-58e61e9ef8d5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Updated VIF entry in instance network info cache for port 9dd54f15-0412-4387-bc8f-07d1b4702dbb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 28 18:19:17 compute-0 nova_compute[189296]: 2025-11-28 18:19:17.245 189300 DEBUG nova.network.neutron [req-a18e5ff5-b98b-4206-bac1-2ef8228002ee req-e874a492-9c5b-469d-a4f6-58e61e9ef8d5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Updating instance_info_cache with network_info: [{"id": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "address": "fa:16:3e:ad:e5:da", "network": {"id": "cecb017f-4e6e-4722-8798-5d73232e6fbd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1305466028-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ebd016d88464c67abefec4da518674a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dd54f15-04", "ovs_interfaceid": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:19:17 compute-0 nova_compute[189296]: 2025-11-28 18:19:17.275 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:19:17 compute-0 nova_compute[189296]: 2025-11-28 18:19:17.279 189300 DEBUG oslo_concurrency.lockutils [req-a18e5ff5-b98b-4206-bac1-2ef8228002ee req-e874a492-9c5b-469d-a4f6-58e61e9ef8d5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-38dd3ba8-0751-41a0-b83f-b49dc0b192c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:19:17 compute-0 nova_compute[189296]: 2025-11-28 18:19:17.434 189300 DEBUG nova.compute.manager [req-9a7c22ec-ec88-46d4-9fd8-61eadd46b457 req-a30e7a7c-fead-4664-80d8-a158e1ed5ed8 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Received event network-vif-plugged-9dd54f15-0412-4387-bc8f-07d1b4702dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:19:17 compute-0 nova_compute[189296]: 2025-11-28 18:19:17.435 189300 DEBUG oslo_concurrency.lockutils [req-9a7c22ec-ec88-46d4-9fd8-61eadd46b457 req-a30e7a7c-fead-4664-80d8-a158e1ed5ed8 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:19:17 compute-0 nova_compute[189296]: 2025-11-28 18:19:17.435 189300 DEBUG oslo_concurrency.lockutils [req-9a7c22ec-ec88-46d4-9fd8-61eadd46b457 req-a30e7a7c-fead-4664-80d8-a158e1ed5ed8 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:19:17 compute-0 nova_compute[189296]: 2025-11-28 18:19:17.436 189300 DEBUG oslo_concurrency.lockutils [req-9a7c22ec-ec88-46d4-9fd8-61eadd46b457 req-a30e7a7c-fead-4664-80d8-a158e1ed5ed8 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:19:17 compute-0 nova_compute[189296]: 2025-11-28 18:19:17.436 189300 DEBUG nova.compute.manager [req-9a7c22ec-ec88-46d4-9fd8-61eadd46b457 req-a30e7a7c-fead-4664-80d8-a158e1ed5ed8 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] No waiting events found dispatching network-vif-plugged-9dd54f15-0412-4387-bc8f-07d1b4702dbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:19:17 compute-0 nova_compute[189296]: 2025-11-28 18:19:17.437 189300 WARNING nova.compute.manager [req-9a7c22ec-ec88-46d4-9fd8-61eadd46b457 req-a30e7a7c-fead-4664-80d8-a158e1ed5ed8 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Received unexpected event network-vif-plugged-9dd54f15-0412-4387-bc8f-07d1b4702dbb for instance with vm_state active and task_state None.#033[00m
Nov 28 18:19:17 compute-0 nova_compute[189296]: 2025-11-28 18:19:17.492 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-1b9021c0-08c4-448d-9f6c-a589a543fb4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:19:17 compute-0 nova_compute[189296]: 2025-11-28 18:19:17.493 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:19:17 compute-0 nova_compute[189296]: 2025-11-28 18:19:17.494 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:19:17 compute-0 nova_compute[189296]: 2025-11-28 18:19:17.494 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:19:18 compute-0 nova_compute[189296]: 2025-11-28 18:19:18.432 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:18 compute-0 nova_compute[189296]: 2025-11-28 18:19:18.619 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:18 compute-0 nova_compute[189296]: 2025-11-28 18:19:18.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:19:18 compute-0 nova_compute[189296]: 2025-11-28 18:19:18.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:19:19 compute-0 podman[249790]: 2025-11-28 18:19:19.024099488 +0000 UTC m=+0.083802376 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 28 18:19:19 compute-0 podman[249791]: 2025-11-28 18:19:19.051048328 +0000 UTC m=+0.106836580 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 28 18:19:19 compute-0 podman[249792]: 2025-11-28 18:19:19.052509204 +0000 UTC m=+0.102975325 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, container_name=kepler, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, distribution-scope=public)
Nov 28 18:19:19 compute-0 podman[249796]: 2025-11-28 18:19:19.059889506 +0000 UTC m=+0.106640266 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3)
Nov 28 18:19:19 compute-0 nova_compute[189296]: 2025-11-28 18:19:19.991 189300 DEBUG nova.compute.manager [req-9aac887d-1529-4e41-97e1-989e5a8544c0 req-52370d6b-a000-44d0-b7f3-e09481dd3281 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Received event network-changed-9dd54f15-0412-4387-bc8f-07d1b4702dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:19:19 compute-0 nova_compute[189296]: 2025-11-28 18:19:19.992 189300 DEBUG nova.compute.manager [req-9aac887d-1529-4e41-97e1-989e5a8544c0 req-52370d6b-a000-44d0-b7f3-e09481dd3281 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Refreshing instance network info cache due to event network-changed-9dd54f15-0412-4387-bc8f-07d1b4702dbb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 28 18:19:19 compute-0 nova_compute[189296]: 2025-11-28 18:19:19.992 189300 DEBUG oslo_concurrency.lockutils [req-9aac887d-1529-4e41-97e1-989e5a8544c0 req-52370d6b-a000-44d0-b7f3-e09481dd3281 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-38dd3ba8-0751-41a0-b83f-b49dc0b192c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:19:19 compute-0 nova_compute[189296]: 2025-11-28 18:19:19.992 189300 DEBUG oslo_concurrency.lockutils [req-9aac887d-1529-4e41-97e1-989e5a8544c0 req-52370d6b-a000-44d0-b7f3-e09481dd3281 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-38dd3ba8-0751-41a0-b83f-b49dc0b192c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:19:19 compute-0 nova_compute[189296]: 2025-11-28 18:19:19.992 189300 DEBUG nova.network.neutron [req-9aac887d-1529-4e41-97e1-989e5a8544c0 req-52370d6b-a000-44d0-b7f3-e09481dd3281 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Refreshing network info cache for port 9dd54f15-0412-4387-bc8f-07d1b4702dbb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 18:19:21 compute-0 nova_compute[189296]: 2025-11-28 18:19:21.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:19:22 compute-0 nova_compute[189296]: 2025-11-28 18:19:22.394 189300 DEBUG nova.network.neutron [req-9aac887d-1529-4e41-97e1-989e5a8544c0 req-52370d6b-a000-44d0-b7f3-e09481dd3281 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Updated VIF entry in instance network info cache for port 9dd54f15-0412-4387-bc8f-07d1b4702dbb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 28 18:19:22 compute-0 nova_compute[189296]: 2025-11-28 18:19:22.394 189300 DEBUG nova.network.neutron [req-9aac887d-1529-4e41-97e1-989e5a8544c0 req-52370d6b-a000-44d0-b7f3-e09481dd3281 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Updating instance_info_cache with network_info: [{"id": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "address": "fa:16:3e:ad:e5:da", "network": {"id": "cecb017f-4e6e-4722-8798-5d73232e6fbd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1305466028-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ebd016d88464c67abefec4da518674a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dd54f15-04", "ovs_interfaceid": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:19:22 compute-0 nova_compute[189296]: 2025-11-28 18:19:22.417 189300 DEBUG oslo_concurrency.lockutils [req-9aac887d-1529-4e41-97e1-989e5a8544c0 req-52370d6b-a000-44d0-b7f3-e09481dd3281 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-38dd3ba8-0751-41a0-b83f-b49dc0b192c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:19:23 compute-0 podman[249865]: 2025-11-28 18:19:23.035992491 +0000 UTC m=+0.102018691 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 28 18:19:23 compute-0 nova_compute[189296]: 2025-11-28 18:19:23.434 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:23 compute-0 nova_compute[189296]: 2025-11-28 18:19:23.620 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:23 compute-0 nova_compute[189296]: 2025-11-28 18:19:23.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:19:23 compute-0 nova_compute[189296]: 2025-11-28 18:19:23.716 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:19:23 compute-0 nova_compute[189296]: 2025-11-28 18:19:23.717 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:19:23 compute-0 nova_compute[189296]: 2025-11-28 18:19:23.717 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:19:23 compute-0 nova_compute[189296]: 2025-11-28 18:19:23.718 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:19:24 compute-0 nova_compute[189296]: 2025-11-28 18:19:24.245 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0af9c8e6-8030-462a-9dfd-d52f041685f5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:19:24 compute-0 nova_compute[189296]: 2025-11-28 18:19:24.306 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0af9c8e6-8030-462a-9dfd-d52f041685f5/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:19:24 compute-0 nova_compute[189296]: 2025-11-28 18:19:24.307 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0af9c8e6-8030-462a-9dfd-d52f041685f5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:19:24 compute-0 nova_compute[189296]: 2025-11-28 18:19:24.366 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0af9c8e6-8030-462a-9dfd-d52f041685f5/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:19:24 compute-0 nova_compute[189296]: 2025-11-28 18:19:24.372 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:19:24 compute-0 nova_compute[189296]: 2025-11-28 18:19:24.428 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:19:24 compute-0 nova_compute[189296]: 2025-11-28 18:19:24.430 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:19:24 compute-0 nova_compute[189296]: 2025-11-28 18:19:24.489 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:19:24 compute-0 nova_compute[189296]: 2025-11-28 18:19:24.809 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:19:24 compute-0 nova_compute[189296]: 2025-11-28 18:19:24.811 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5029MB free_disk=72.34040451049805GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:19:24 compute-0 nova_compute[189296]: 2025-11-28 18:19:24.811 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:19:24 compute-0 nova_compute[189296]: 2025-11-28 18:19:24.812 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:19:25 compute-0 nova_compute[189296]: 2025-11-28 18:19:25.394 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 0af9c8e6-8030-462a-9dfd-d52f041685f5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:19:25 compute-0 nova_compute[189296]: 2025-11-28 18:19:25.395 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 38dd3ba8-0751-41a0-b83f-b49dc0b192c6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:19:25 compute-0 nova_compute[189296]: 2025-11-28 18:19:25.395 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:19:25 compute-0 nova_compute[189296]: 2025-11-28 18:19:25.396 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:19:25 compute-0 nova_compute[189296]: 2025-11-28 18:19:25.595 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:19:25 compute-0 nova_compute[189296]: 2025-11-28 18:19:25.709 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:19:25 compute-0 nova_compute[189296]: 2025-11-28 18:19:25.740 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:19:25 compute-0 nova_compute[189296]: 2025-11-28 18:19:25.740 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.928s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:19:25 compute-0 ovn_controller[97771]: 2025-11-28T18:19:25Z|00128|binding|INFO|Releasing lport fadccca5-e309-4390-a64b-6711ee103450 from this chassis (sb_readonly=0)
Nov 28 18:19:25 compute-0 ovn_controller[97771]: 2025-11-28T18:19:25Z|00129|binding|INFO|Releasing lport 9f681880-a374-4938-a7d7-30fad6716ed2 from this chassis (sb_readonly=0)
Nov 28 18:19:26 compute-0 nova_compute[189296]: 2025-11-28 18:19:26.081 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:27 compute-0 nova_compute[189296]: 2025-11-28 18:19:27.742 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:19:27 compute-0 nova_compute[189296]: 2025-11-28 18:19:27.744 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:19:27 compute-0 nova_compute[189296]: 2025-11-28 18:19:27.748 189300 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764353952.747406, 1b9021c0-08c4-448d-9f6c-a589a543fb4c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:19:27 compute-0 nova_compute[189296]: 2025-11-28 18:19:27.749 189300 INFO nova.compute.manager [-] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] VM Stopped (Lifecycle Event)#033[00m
Nov 28 18:19:27 compute-0 nova_compute[189296]: 2025-11-28 18:19:27.840 189300 DEBUG nova.compute.manager [None req-bf91bdac-2189-4dfd-a239-a2f4432dc669 - - - - - -] [instance: 1b9021c0-08c4-448d-9f6c-a589a543fb4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:19:28 compute-0 nova_compute[189296]: 2025-11-28 18:19:28.437 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:28 compute-0 nova_compute[189296]: 2025-11-28 18:19:28.622 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:29 compute-0 podman[203494]: time="2025-11-28T18:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:19:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30755 "" "Go-http-client/1.1"
Nov 28 18:19:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5250 "" "Go-http-client/1.1"
Nov 28 18:19:31 compute-0 openstack_network_exporter[205632]: ERROR   18:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:19:31 compute-0 openstack_network_exporter[205632]: ERROR   18:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:19:31 compute-0 openstack_network_exporter[205632]: ERROR   18:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:19:31 compute-0 openstack_network_exporter[205632]: ERROR   18:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:19:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:19:31 compute-0 openstack_network_exporter[205632]: ERROR   18:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:19:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:19:32 compute-0 podman[249906]: 2025-11-28 18:19:32.001065166 +0000 UTC m=+0.060863313 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:19:33 compute-0 nova_compute[189296]: 2025-11-28 18:19:33.445 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:33 compute-0 nova_compute[189296]: 2025-11-28 18:19:33.624 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:36 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:36.220 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:8b:d3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:a2:f8:d3:3f:9a'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:19:36 compute-0 nova_compute[189296]: 2025-11-28 18:19:36.224 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:36 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:36.227 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 28 18:19:38 compute-0 nova_compute[189296]: 2025-11-28 18:19:38.450 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:38 compute-0 nova_compute[189296]: 2025-11-28 18:19:38.627 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:39 compute-0 ovn_controller[97771]: 2025-11-28T18:19:39Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:45:0d:59 10.100.0.9
Nov 28 18:19:39 compute-0 ovn_controller[97771]: 2025-11-28T18:19:39Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:45:0d:59 10.100.0.9
Nov 28 18:19:43 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:43.232 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d60b742f-7e94-4137-b50a-cfc8eac54167, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:19:43 compute-0 nova_compute[189296]: 2025-11-28 18:19:43.456 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:43 compute-0 nova_compute[189296]: 2025-11-28 18:19:43.630 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:44 compute-0 podman[249950]: 2025-11-28 18:19:44.031352132 +0000 UTC m=+0.073593167 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Nov 28 18:19:44 compute-0 podman[249944]: 2025-11-28 18:19:44.043747396 +0000 UTC m=+0.101718638 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, container_name=openstack_network_exporter, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 28 18:19:44 compute-0 podman[249945]: 2025-11-28 18:19:44.053476086 +0000 UTC m=+0.103206535 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=f26160204c78771e78cdd2489258319b, tcib_managed=true)
Nov 28 18:19:45 compute-0 nova_compute[189296]: 2025-11-28 18:19:45.786 189300 INFO nova.compute.manager [None req-086e6806-8c65-4e2f-a6c1-54637b2f9aaf 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Get console output#033[00m
Nov 28 18:19:45 compute-0 nova_compute[189296]: 2025-11-28 18:19:45.910 238742 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 28 18:19:47 compute-0 nova_compute[189296]: 2025-11-28 18:19:47.908 189300 DEBUG nova.compute.manager [req-b36a5e8a-ee97-4e09-ae51-f1ce7007e5a9 req-41f2e385-6053-484d-a817-12d10ded58ba 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Received event network-changed-7a69f46e-77c5-4129-9783-254170a7422b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:19:47 compute-0 nova_compute[189296]: 2025-11-28 18:19:47.909 189300 DEBUG nova.compute.manager [req-b36a5e8a-ee97-4e09-ae51-f1ce7007e5a9 req-41f2e385-6053-484d-a817-12d10ded58ba 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Refreshing instance network info cache due to event network-changed-7a69f46e-77c5-4129-9783-254170a7422b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 28 18:19:47 compute-0 nova_compute[189296]: 2025-11-28 18:19:47.909 189300 DEBUG oslo_concurrency.lockutils [req-b36a5e8a-ee97-4e09-ae51-f1ce7007e5a9 req-41f2e385-6053-484d-a817-12d10ded58ba 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-0af9c8e6-8030-462a-9dfd-d52f041685f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:19:47 compute-0 nova_compute[189296]: 2025-11-28 18:19:47.910 189300 DEBUG oslo_concurrency.lockutils [req-b36a5e8a-ee97-4e09-ae51-f1ce7007e5a9 req-41f2e385-6053-484d-a817-12d10ded58ba 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-0af9c8e6-8030-462a-9dfd-d52f041685f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:19:47 compute-0 nova_compute[189296]: 2025-11-28 18:19:47.910 189300 DEBUG nova.network.neutron [req-b36a5e8a-ee97-4e09-ae51-f1ce7007e5a9 req-41f2e385-6053-484d-a817-12d10ded58ba 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Refreshing network info cache for port 7a69f46e-77c5-4129-9783-254170a7422b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 18:19:48 compute-0 nova_compute[189296]: 2025-11-28 18:19:48.461 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:48 compute-0 nova_compute[189296]: 2025-11-28 18:19:48.633 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:49 compute-0 nova_compute[189296]: 2025-11-28 18:19:49.769 189300 DEBUG nova.network.neutron [req-b36a5e8a-ee97-4e09-ae51-f1ce7007e5a9 req-41f2e385-6053-484d-a817-12d10ded58ba 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Updated VIF entry in instance network info cache for port 7a69f46e-77c5-4129-9783-254170a7422b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 28 18:19:49 compute-0 nova_compute[189296]: 2025-11-28 18:19:49.770 189300 DEBUG nova.network.neutron [req-b36a5e8a-ee97-4e09-ae51-f1ce7007e5a9 req-41f2e385-6053-484d-a817-12d10ded58ba 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Updating instance_info_cache with network_info: [{"id": "7a69f46e-77c5-4129-9783-254170a7422b", "address": "fa:16:3e:45:0d:59", "network": {"id": "16e2cef3-e4a2-4570-962f-fcbf9f3d2577", "bridge": "br-int", "label": "tempest-network-smoke--630554822", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c41bbf2b30ca428fbd489c3dc29e8045", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a69f46e-77", "ovs_interfaceid": "7a69f46e-77c5-4129-9783-254170a7422b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:19:49 compute-0 nova_compute[189296]: 2025-11-28 18:19:49.793 189300 DEBUG oslo_concurrency.lockutils [req-b36a5e8a-ee97-4e09-ae51-f1ce7007e5a9 req-41f2e385-6053-484d-a817-12d10ded58ba 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-0af9c8e6-8030-462a-9dfd-d52f041685f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:19:49 compute-0 podman[250024]: 2025-11-28 18:19:49.914373987 +0000 UTC m=+0.096292345 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 28 18:19:49 compute-0 podman[250023]: 2025-11-28 18:19:49.917056624 +0000 UTC m=+0.109850030 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 18:19:49 compute-0 podman[250025]: 2025-11-28 18:19:49.943549937 +0000 UTC m=+0.132200191 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, release-0.7.12=, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=base rhel9, name=ubi9, container_name=kepler, maintainer=Red Hat, Inc., release=1214.1726694543)
Nov 28 18:19:49 compute-0 podman[250026]: 2025-11-28 18:19:49.945404113 +0000 UTC m=+0.128618383 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 28 18:19:50 compute-0 ovn_controller[97771]: 2025-11-28T18:19:50Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ad:e5:da 10.100.0.8
Nov 28 18:19:50 compute-0 ovn_controller[97771]: 2025-11-28T18:19:50Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ad:e5:da 10.100.0.8
Nov 28 18:19:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:52.632 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:19:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:52.633 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:19:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:19:52.634 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:19:53 compute-0 nova_compute[189296]: 2025-11-28 18:19:53.464 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:53 compute-0 nova_compute[189296]: 2025-11-28 18:19:53.635 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:19:54 compute-0 podman[250101]: 2025-11-28 18:19:54.076037382 +0000 UTC m=+0.133319970 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 28 18:19:57 compute-0 nova_compute[189296]: 2025-11-28 18:19:57.536 189300 DEBUG oslo_concurrency.lockutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Acquiring lock "5e570bcf-69d9-41f4-b621-d75ff7b1bd6c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:19:57 compute-0 nova_compute[189296]: 2025-11-28 18:19:57.536 189300 DEBUG oslo_concurrency.lockutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "5e570bcf-69d9-41f4-b621-d75ff7b1bd6c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:19:57 compute-0 nova_compute[189296]: 2025-11-28 18:19:57.557 189300 DEBUG nova.compute.manager [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 28 18:19:57 compute-0 nova_compute[189296]: 2025-11-28 18:19:57.678 189300 DEBUG oslo_concurrency.lockutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:19:57 compute-0 nova_compute[189296]: 2025-11-28 18:19:57.679 189300 DEBUG oslo_concurrency.lockutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:19:57 compute-0 nova_compute[189296]: 2025-11-28 18:19:57.692 189300 DEBUG nova.virt.hardware [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 28 18:19:57 compute-0 nova_compute[189296]: 2025-11-28 18:19:57.693 189300 INFO nova.compute.claims [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Claim successful on node compute-0.ctlplane.example.com
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.091 189300 DEBUG nova.compute.provider_tree [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.114 189300 DEBUG nova.scheduler.client.report [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.171 189300 DEBUG oslo_concurrency.lockutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.492s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.173 189300 DEBUG nova.compute.manager [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.284 189300 DEBUG nova.compute.manager [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.285 189300 DEBUG nova.network.neutron [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.316 189300 INFO nova.virt.libvirt.driver [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.346 189300 DEBUG nova.compute.manager [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.466 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.474 189300 DEBUG nova.compute.manager [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.476 189300 DEBUG nova.virt.libvirt.driver [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.477 189300 INFO nova.virt.libvirt.driver [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Creating image(s)
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.477 189300 DEBUG oslo_concurrency.lockutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Acquiring lock "/var/lib/nova/instances/5e570bcf-69d9-41f4-b621-d75ff7b1bd6c/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.478 189300 DEBUG oslo_concurrency.lockutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "/var/lib/nova/instances/5e570bcf-69d9-41f4-b621-d75ff7b1bd6c/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.479 189300 DEBUG oslo_concurrency.lockutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "/var/lib/nova/instances/5e570bcf-69d9-41f4-b621-d75ff7b1bd6c/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.491 189300 DEBUG oslo_concurrency.processutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.532 189300 DEBUG nova.policy [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0052e0d91c7e4c98bd11644a4dca818a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c41bbf2b30ca428fbd489c3dc29e8045', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.569 189300 DEBUG oslo_concurrency.processutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.570 189300 DEBUG oslo_concurrency.lockutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Acquiring lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.571 189300 DEBUG oslo_concurrency.lockutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.584 189300 DEBUG oslo_concurrency.processutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.639 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.644 189300 DEBUG oslo_concurrency.processutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.646 189300 DEBUG oslo_concurrency.processutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c,backing_fmt=raw /var/lib/nova/instances/5e570bcf-69d9-41f4-b621-d75ff7b1bd6c/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.922 189300 DEBUG oslo_concurrency.processutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c,backing_fmt=raw /var/lib/nova/instances/5e570bcf-69d9-41f4-b621-d75ff7b1bd6c/disk 1073741824" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.924 189300 DEBUG oslo_concurrency.lockutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.352s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:19:58 compute-0 nova_compute[189296]: 2025-11-28 18:19:58.925 189300 DEBUG oslo_concurrency.processutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:19:59 compute-0 nova_compute[189296]: 2025-11-28 18:19:59.005 189300 DEBUG oslo_concurrency.processutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:19:59 compute-0 nova_compute[189296]: 2025-11-28 18:19:59.007 189300 DEBUG nova.virt.disk.api [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Checking if we can resize image /var/lib/nova/instances/5e570bcf-69d9-41f4-b621-d75ff7b1bd6c/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 28 18:19:59 compute-0 nova_compute[189296]: 2025-11-28 18:19:59.008 189300 DEBUG oslo_concurrency.processutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5e570bcf-69d9-41f4-b621-d75ff7b1bd6c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:19:59 compute-0 nova_compute[189296]: 2025-11-28 18:19:59.072 189300 DEBUG oslo_concurrency.processutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5e570bcf-69d9-41f4-b621-d75ff7b1bd6c/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:19:59 compute-0 nova_compute[189296]: 2025-11-28 18:19:59.075 189300 DEBUG nova.virt.disk.api [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Cannot resize image /var/lib/nova/instances/5e570bcf-69d9-41f4-b621-d75ff7b1bd6c/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 28 18:19:59 compute-0 nova_compute[189296]: 2025-11-28 18:19:59.076 189300 DEBUG nova.objects.instance [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lazy-loading 'migration_context' on Instance uuid 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 28 18:19:59 compute-0 nova_compute[189296]: 2025-11-28 18:19:59.097 189300 DEBUG nova.virt.libvirt.driver [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 28 18:19:59 compute-0 nova_compute[189296]: 2025-11-28 18:19:59.098 189300 DEBUG nova.virt.libvirt.driver [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Ensure instance console log exists: /var/lib/nova/instances/5e570bcf-69d9-41f4-b621-d75ff7b1bd6c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 28 18:19:59 compute-0 nova_compute[189296]: 2025-11-28 18:19:59.099 189300 DEBUG oslo_concurrency.lockutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:19:59 compute-0 nova_compute[189296]: 2025-11-28 18:19:59.100 189300 DEBUG oslo_concurrency.lockutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:19:59 compute-0 nova_compute[189296]: 2025-11-28 18:19:59.101 189300 DEBUG oslo_concurrency.lockutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:19:59 compute-0 podman[203494]: time="2025-11-28T18:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:19:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30755 "" "Go-http-client/1.1"
Nov 28 18:19:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5253 "" "Go-http-client/1.1"
Nov 28 18:20:00 compute-0 nova_compute[189296]: 2025-11-28 18:20:00.001 189300 DEBUG nova.network.neutron [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Successfully created port: e58535aa-0624-4101-bd81-7c3c483d4ac7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 28 18:20:00 compute-0 nova_compute[189296]: 2025-11-28 18:20:00.137 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:20:01 compute-0 openstack_network_exporter[205632]: ERROR   18:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:20:01 compute-0 openstack_network_exporter[205632]: ERROR   18:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:20:01 compute-0 openstack_network_exporter[205632]: ERROR   18:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:20:01 compute-0 openstack_network_exporter[205632]: ERROR   18:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:20:01 compute-0 openstack_network_exporter[205632]: ERROR   18:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:20:01 compute-0 nova_compute[189296]: 2025-11-28 18:20:01.863 189300 DEBUG nova.network.neutron [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Successfully updated port: e58535aa-0624-4101-bd81-7c3c483d4ac7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 28 18:20:01 compute-0 nova_compute[189296]: 2025-11-28 18:20:01.883 189300 DEBUG oslo_concurrency.lockutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Acquiring lock "refresh_cache-5e570bcf-69d9-41f4-b621-d75ff7b1bd6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 28 18:20:01 compute-0 nova_compute[189296]: 2025-11-28 18:20:01.883 189300 DEBUG oslo_concurrency.lockutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Acquired lock "refresh_cache-5e570bcf-69d9-41f4-b621-d75ff7b1bd6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 28 18:20:01 compute-0 nova_compute[189296]: 2025-11-28 18:20:01.884 189300 DEBUG nova.network.neutron [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 28 18:20:02 compute-0 nova_compute[189296]: 2025-11-28 18:20:02.070 189300 DEBUG nova.compute.manager [req-d1f6644c-db29-4b94-9ece-8926d89b20a5 req-c8187c34-b1a1-43ba-af14-75c7dbeda72d 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Received event network-changed-e58535aa-0624-4101-bd81-7c3c483d4ac7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 28 18:20:02 compute-0 nova_compute[189296]: 2025-11-28 18:20:02.071 189300 DEBUG nova.compute.manager [req-d1f6644c-db29-4b94-9ece-8926d89b20a5 req-c8187c34-b1a1-43ba-af14-75c7dbeda72d 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Refreshing instance network info cache due to event network-changed-e58535aa-0624-4101-bd81-7c3c483d4ac7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 28 18:20:02 compute-0 nova_compute[189296]: 2025-11-28 18:20:02.072 189300 DEBUG oslo_concurrency.lockutils [req-d1f6644c-db29-4b94-9ece-8926d89b20a5 req-c8187c34-b1a1-43ba-af14-75c7dbeda72d 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-5e570bcf-69d9-41f4-b621-d75ff7b1bd6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 28 18:20:02 compute-0 nova_compute[189296]: 2025-11-28 18:20:02.260 189300 DEBUG nova.network.neutron [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 28 18:20:03 compute-0 podman[250141]: 2025-11-28 18:20:03.002542296 +0000 UTC m=+0.062881012 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.471 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.641 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.720 189300 DEBUG nova.network.neutron [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Updating instance_info_cache with network_info: [{"id": "e58535aa-0624-4101-bd81-7c3c483d4ac7", "address": "fa:16:3e:39:25:e6", "network": {"id": "16e2cef3-e4a2-4570-962f-fcbf9f3d2577", "bridge": "br-int", "label": "tempest-network-smoke--630554822", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c41bbf2b30ca428fbd489c3dc29e8045", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape58535aa-06", "ovs_interfaceid": "e58535aa-0624-4101-bd81-7c3c483d4ac7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.753 189300 DEBUG oslo_concurrency.lockutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Releasing lock "refresh_cache-5e570bcf-69d9-41f4-b621-d75ff7b1bd6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.754 189300 DEBUG nova.compute.manager [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Instance network_info: |[{"id": "e58535aa-0624-4101-bd81-7c3c483d4ac7", "address": "fa:16:3e:39:25:e6", "network": {"id": "16e2cef3-e4a2-4570-962f-fcbf9f3d2577", "bridge": "br-int", "label": "tempest-network-smoke--630554822", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c41bbf2b30ca428fbd489c3dc29e8045", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape58535aa-06", "ovs_interfaceid": "e58535aa-0624-4101-bd81-7c3c483d4ac7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.755 189300 DEBUG oslo_concurrency.lockutils [req-d1f6644c-db29-4b94-9ece-8926d89b20a5 req-c8187c34-b1a1-43ba-af14-75c7dbeda72d 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-5e570bcf-69d9-41f4-b621-d75ff7b1bd6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.756 189300 DEBUG nova.network.neutron [req-d1f6644c-db29-4b94-9ece-8926d89b20a5 req-c8187c34-b1a1-43ba-af14-75c7dbeda72d 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Refreshing network info cache for port e58535aa-0624-4101-bd81-7c3c483d4ac7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.763 189300 DEBUG nova.virt.libvirt.driver [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Start _get_guest_xml network_info=[{"id": "e58535aa-0624-4101-bd81-7c3c483d4ac7", "address": "fa:16:3e:39:25:e6", "network": {"id": "16e2cef3-e4a2-4570-962f-fcbf9f3d2577", "bridge": "br-int", "label": "tempest-network-smoke--630554822", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c41bbf2b30ca428fbd489c3dc29e8045", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape58535aa-06", "ovs_interfaceid": "e58535aa-0624-4101-bd81-7c3c483d4ac7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-28T18:16:38Z,direct_url=<?>,disk_format='qcow2',id=ffec9e61-65fb-46ae-8d34-338639229ec3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-28T18:16:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'image_id': 'ffec9e61-65fb-46ae-8d34-338639229ec3'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.772 189300 WARNING nova.virt.libvirt.driver [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.780 189300 DEBUG nova.virt.libvirt.host [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.780 189300 DEBUG nova.virt.libvirt.host [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.791 189300 DEBUG nova.virt.libvirt.host [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.792 189300 DEBUG nova.virt.libvirt.host [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.792 189300 DEBUG nova.virt.libvirt.driver [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.793 189300 DEBUG nova.virt.hardware [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-28T18:16:37Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b177f611-8f79-4bfd-9a12-e83e9545757b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-28T18:16:38Z,direct_url=<?>,disk_format='qcow2',id=ffec9e61-65fb-46ae-8d34-338639229ec3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-28T18:16:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.794 189300 DEBUG nova.virt.hardware [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.794 189300 DEBUG nova.virt.hardware [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.795 189300 DEBUG nova.virt.hardware [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.795 189300 DEBUG nova.virt.hardware [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.796 189300 DEBUG nova.virt.hardware [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.797 189300 DEBUG nova.virt.hardware [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.797 189300 DEBUG nova.virt.hardware [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.798 189300 DEBUG nova.virt.hardware [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.798 189300 DEBUG nova.virt.hardware [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.799 189300 DEBUG nova.virt.hardware [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.803 189300 DEBUG nova.virt.libvirt.vif [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T18:19:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1212840995',display_name='tempest-TestNetworkBasicOps-server-1212840995',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1212840995',id=13,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE+8SFa3xQenW7OAm80PYbdQR5vzTm/9Wx8vyjhFMikx/tqkpCAIM9M1XwKxUttxXJbVjGWQJZ3bUpSJJtqa5la3F2ivvclV6oghFm55fNXyqDmtzHesal/acrtB1Knsrw==',key_name='tempest-TestNetworkBasicOps-1843861968',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c41bbf2b30ca428fbd489c3dc29e8045',ramdisk_id='',reservation_id='r-f5ps81n7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-543144913',owner_user_name='tempest-TestNetworkBasicOps-543144913-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:19:58Z,user_data=None,user_id='0052e0d91c7e4c98bd11644a4dca818a',uuid=5e570bcf-69d9-41f4-b621-d75ff7b1bd6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e58535aa-0624-4101-bd81-7c3c483d4ac7", "address": "fa:16:3e:39:25:e6", "network": {"id": "16e2cef3-e4a2-4570-962f-fcbf9f3d2577", "bridge": "br-int", "label": "tempest-network-smoke--630554822", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "c41bbf2b30ca428fbd489c3dc29e8045", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape58535aa-06", "ovs_interfaceid": "e58535aa-0624-4101-bd81-7c3c483d4ac7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.804 189300 DEBUG nova.network.os_vif_util [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Converting VIF {"id": "e58535aa-0624-4101-bd81-7c3c483d4ac7", "address": "fa:16:3e:39:25:e6", "network": {"id": "16e2cef3-e4a2-4570-962f-fcbf9f3d2577", "bridge": "br-int", "label": "tempest-network-smoke--630554822", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c41bbf2b30ca428fbd489c3dc29e8045", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape58535aa-06", "ovs_interfaceid": "e58535aa-0624-4101-bd81-7c3c483d4ac7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.805 189300 DEBUG nova.network.os_vif_util [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:25:e6,bridge_name='br-int',has_traffic_filtering=True,id=e58535aa-0624-4101-bd81-7c3c483d4ac7,network=Network(16e2cef3-e4a2-4570-962f-fcbf9f3d2577),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape58535aa-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.806 189300 DEBUG nova.objects.instance [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.834 189300 DEBUG nova.virt.libvirt.driver [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] End _get_guest_xml xml=<domain type="kvm">
Nov 28 18:20:03 compute-0 nova_compute[189296]:  <uuid>5e570bcf-69d9-41f4-b621-d75ff7b1bd6c</uuid>
Nov 28 18:20:03 compute-0 nova_compute[189296]:  <name>instance-0000000d</name>
Nov 28 18:20:03 compute-0 nova_compute[189296]:  <memory>131072</memory>
Nov 28 18:20:03 compute-0 nova_compute[189296]:  <vcpu>1</vcpu>
Nov 28 18:20:03 compute-0 nova_compute[189296]:  <metadata>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <nova:name>tempest-TestNetworkBasicOps-server-1212840995</nova:name>
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <nova:creationTime>2025-11-28 18:20:03</nova:creationTime>
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <nova:flavor name="m1.nano">
Nov 28 18:20:03 compute-0 nova_compute[189296]:        <nova:memory>128</nova:memory>
Nov 28 18:20:03 compute-0 nova_compute[189296]:        <nova:disk>1</nova:disk>
Nov 28 18:20:03 compute-0 nova_compute[189296]:        <nova:swap>0</nova:swap>
Nov 28 18:20:03 compute-0 nova_compute[189296]:        <nova:ephemeral>0</nova:ephemeral>
Nov 28 18:20:03 compute-0 nova_compute[189296]:        <nova:vcpus>1</nova:vcpus>
Nov 28 18:20:03 compute-0 nova_compute[189296]:      </nova:flavor>
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <nova:owner>
Nov 28 18:20:03 compute-0 nova_compute[189296]:        <nova:user uuid="0052e0d91c7e4c98bd11644a4dca818a">tempest-TestNetworkBasicOps-543144913-project-member</nova:user>
Nov 28 18:20:03 compute-0 nova_compute[189296]:        <nova:project uuid="c41bbf2b30ca428fbd489c3dc29e8045">tempest-TestNetworkBasicOps-543144913</nova:project>
Nov 28 18:20:03 compute-0 nova_compute[189296]:      </nova:owner>
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <nova:root type="image" uuid="ffec9e61-65fb-46ae-8d34-338639229ec3"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <nova:ports>
Nov 28 18:20:03 compute-0 nova_compute[189296]:        <nova:port uuid="e58535aa-0624-4101-bd81-7c3c483d4ac7">
Nov 28 18:20:03 compute-0 nova_compute[189296]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:        </nova:port>
Nov 28 18:20:03 compute-0 nova_compute[189296]:      </nova:ports>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    </nova:instance>
Nov 28 18:20:03 compute-0 nova_compute[189296]:  </metadata>
Nov 28 18:20:03 compute-0 nova_compute[189296]:  <sysinfo type="smbios">
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <system>
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <entry name="manufacturer">RDO</entry>
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <entry name="product">OpenStack Compute</entry>
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <entry name="serial">5e570bcf-69d9-41f4-b621-d75ff7b1bd6c</entry>
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <entry name="uuid">5e570bcf-69d9-41f4-b621-d75ff7b1bd6c</entry>
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <entry name="family">Virtual Machine</entry>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    </system>
Nov 28 18:20:03 compute-0 nova_compute[189296]:  </sysinfo>
Nov 28 18:20:03 compute-0 nova_compute[189296]:  <os>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <boot dev="hd"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <smbios mode="sysinfo"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:  </os>
Nov 28 18:20:03 compute-0 nova_compute[189296]:  <features>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <acpi/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <apic/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <vmcoreinfo/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:  </features>
Nov 28 18:20:03 compute-0 nova_compute[189296]:  <clock offset="utc">
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <timer name="pit" tickpolicy="delay"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <timer name="hpet" present="no"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:  </clock>
Nov 28 18:20:03 compute-0 nova_compute[189296]:  <cpu mode="host-model" match="exact">
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <topology sockets="1" cores="1" threads="1"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:  </cpu>
Nov 28 18:20:03 compute-0 nova_compute[189296]:  <devices>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <disk type="file" device="disk">
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/5e570bcf-69d9-41f4-b621-d75ff7b1bd6c/disk"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <target dev="vda" bus="virtio"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <disk type="file" device="cdrom">
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <driver name="qemu" type="raw" cache="none"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/5e570bcf-69d9-41f4-b621-d75ff7b1bd6c/disk.config"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <target dev="sda" bus="sata"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <interface type="ethernet">
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <mac address="fa:16:3e:39:25:e6"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <driver name="vhost" rx_queue_size="512"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <mtu size="1442"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <target dev="tape58535aa-06"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    </interface>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <serial type="pty">
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <log file="/var/lib/nova/instances/5e570bcf-69d9-41f4-b621-d75ff7b1bd6c/console.log" append="off"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    </serial>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <video>
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    </video>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <input type="tablet" bus="usb"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <rng model="virtio">
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <backend model="random">/dev/urandom</backend>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    </rng>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <controller type="usb" index="0"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    <memballoon model="virtio">
Nov 28 18:20:03 compute-0 nova_compute[189296]:      <stats period="10"/>
Nov 28 18:20:03 compute-0 nova_compute[189296]:    </memballoon>
Nov 28 18:20:03 compute-0 nova_compute[189296]:  </devices>
Nov 28 18:20:03 compute-0 nova_compute[189296]: </domain>
Nov 28 18:20:03 compute-0 nova_compute[189296]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.835 189300 DEBUG nova.compute.manager [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Preparing to wait for external event network-vif-plugged-e58535aa-0624-4101-bd81-7c3c483d4ac7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.836 189300 DEBUG oslo_concurrency.lockutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Acquiring lock "5e570bcf-69d9-41f4-b621-d75ff7b1bd6c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.836 189300 DEBUG oslo_concurrency.lockutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "5e570bcf-69d9-41f4-b621-d75ff7b1bd6c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.837 189300 DEBUG oslo_concurrency.lockutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "5e570bcf-69d9-41f4-b621-d75ff7b1bd6c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.838 189300 DEBUG nova.virt.libvirt.vif [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T18:19:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1212840995',display_name='tempest-TestNetworkBasicOps-server-1212840995',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1212840995',id=13,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE+8SFa3xQenW7OAm80PYbdQR5vzTm/9Wx8vyjhFMikx/tqkpCAIM9M1XwKxUttxXJbVjGWQJZ3bUpSJJtqa5la3F2ivvclV6oghFm55fNXyqDmtzHesal/acrtB1Knsrw==',key_name='tempest-TestNetworkBasicOps-1843861968',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c41bbf2b30ca428fbd489c3dc29e8045',ramdisk_id='',reservation_id='r-f5ps81n7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-543144913',owner_user_name='tempest-TestNetworkBasicOps-543144913-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:19:58Z,user_data=None,user_id='0052e0d91c7e4c98bd11644a4dca818a',uuid=5e570bcf-69d9-41f4-b621-d75ff7b1bd6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e58535aa-0624-4101-bd81-7c3c483d4ac7", "address": "fa:16:3e:39:25:e6", "network": {"id": "16e2cef3-e4a2-4570-962f-fcbf9f3d2577", "bridge": "br-int", "label": "tempest-network-smoke--630554822", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "c41bbf2b30ca428fbd489c3dc29e8045", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape58535aa-06", "ovs_interfaceid": "e58535aa-0624-4101-bd81-7c3c483d4ac7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.839 189300 DEBUG nova.network.os_vif_util [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Converting VIF {"id": "e58535aa-0624-4101-bd81-7c3c483d4ac7", "address": "fa:16:3e:39:25:e6", "network": {"id": "16e2cef3-e4a2-4570-962f-fcbf9f3d2577", "bridge": "br-int", "label": "tempest-network-smoke--630554822", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c41bbf2b30ca428fbd489c3dc29e8045", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape58535aa-06", "ovs_interfaceid": "e58535aa-0624-4101-bd81-7c3c483d4ac7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.840 189300 DEBUG nova.network.os_vif_util [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:25:e6,bridge_name='br-int',has_traffic_filtering=True,id=e58535aa-0624-4101-bd81-7c3c483d4ac7,network=Network(16e2cef3-e4a2-4570-962f-fcbf9f3d2577),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape58535aa-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.840 189300 DEBUG os_vif [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:25:e6,bridge_name='br-int',has_traffic_filtering=True,id=e58535aa-0624-4101-bd81-7c3c483d4ac7,network=Network(16e2cef3-e4a2-4570-962f-fcbf9f3d2577),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape58535aa-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.842 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.843 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.843 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.848 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.848 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape58535aa-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.849 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape58535aa-06, col_values=(('external_ids', {'iface-id': 'e58535aa-0624-4101-bd81-7c3c483d4ac7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:25:e6', 'vm-uuid': '5e570bcf-69d9-41f4-b621-d75ff7b1bd6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.850 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:20:03 compute-0 NetworkManager[56307]: <info>  [1764354003.8527] manager: (tape58535aa-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.852 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.861 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.862 189300 INFO os_vif [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:25:e6,bridge_name='br-int',has_traffic_filtering=True,id=e58535aa-0624-4101-bd81-7c3c483d4ac7,network=Network(16e2cef3-e4a2-4570-962f-fcbf9f3d2577),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape58535aa-06')
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.905 189300 DEBUG nova.virt.libvirt.driver [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.906 189300 DEBUG nova.virt.libvirt.driver [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.906 189300 DEBUG nova.virt.libvirt.driver [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] No VIF found with MAC fa:16:3e:39:25:e6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 28 18:20:03 compute-0 nova_compute[189296]: 2025-11-28 18:20:03.906 189300 INFO nova.virt.libvirt.driver [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Using config drive
Nov 28 18:20:04 compute-0 nova_compute[189296]: 2025-11-28 18:20:04.546 189300 INFO nova.virt.libvirt.driver [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Creating config drive at /var/lib/nova/instances/5e570bcf-69d9-41f4-b621-d75ff7b1bd6c/disk.config
Nov 28 18:20:04 compute-0 nova_compute[189296]: 2025-11-28 18:20:04.553 189300 DEBUG oslo_concurrency.processutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5e570bcf-69d9-41f4-b621-d75ff7b1bd6c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptml_gk74 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:20:04 compute-0 nova_compute[189296]: 2025-11-28 18:20:04.676 189300 DEBUG oslo_concurrency.processutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5e570bcf-69d9-41f4-b621-d75ff7b1bd6c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptml_gk74" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:20:04 compute-0 kernel: tape58535aa-06: entered promiscuous mode
Nov 28 18:20:04 compute-0 NetworkManager[56307]: <info>  [1764354004.7455] manager: (tape58535aa-06): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Nov 28 18:20:04 compute-0 nova_compute[189296]: 2025-11-28 18:20:04.750 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:20:04 compute-0 ovn_controller[97771]: 2025-11-28T18:20:04Z|00130|binding|INFO|Claiming lport e58535aa-0624-4101-bd81-7c3c483d4ac7 for this chassis.
Nov 28 18:20:04 compute-0 ovn_controller[97771]: 2025-11-28T18:20:04Z|00131|binding|INFO|e58535aa-0624-4101-bd81-7c3c483d4ac7: Claiming fa:16:3e:39:25:e6 10.100.0.4
Nov 28 18:20:04 compute-0 ovn_controller[97771]: 2025-11-28T18:20:04Z|00132|binding|INFO|Setting lport e58535aa-0624-4101-bd81-7c3c483d4ac7 ovn-installed in OVS
Nov 28 18:20:04 compute-0 nova_compute[189296]: 2025-11-28 18:20:04.769 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:20:04 compute-0 nova_compute[189296]: 2025-11-28 18:20:04.773 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:20:04 compute-0 systemd-machined[155703]: New machine qemu-13-instance-0000000d.
Nov 28 18:20:04 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Nov 28 18:20:04 compute-0 systemd-udevd[250184]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 18:20:04 compute-0 NetworkManager[56307]: <info>  [1764354004.8457] device (tape58535aa-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 28 18:20:04 compute-0 NetworkManager[56307]: <info>  [1764354004.8526] device (tape58535aa-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 28 18:20:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:04.882 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:25:e6 10.100.0.4'], port_security=['fa:16:3e:39:25:e6 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '5e570bcf-69d9-41f4-b621-d75ff7b1bd6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16e2cef3-e4a2-4570-962f-fcbf9f3d2577', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c41bbf2b30ca428fbd489c3dc29e8045', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bbd1a953-a99e-470b-b1ba-0c8ce7261629', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7149c56-1986-4c48-b442-f7c364e29e84, chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=e58535aa-0624-4101-bd81-7c3c483d4ac7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:20:04 compute-0 ovn_controller[97771]: 2025-11-28T18:20:04Z|00133|binding|INFO|Setting lport e58535aa-0624-4101-bd81-7c3c483d4ac7 up in Southbound
Nov 28 18:20:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:04.883 106624 INFO neutron.agent.ovn.metadata.agent [-] Port e58535aa-0624-4101-bd81-7c3c483d4ac7 in datapath 16e2cef3-e4a2-4570-962f-fcbf9f3d2577 bound to our chassis
Nov 28 18:20:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:04.885 106624 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16e2cef3-e4a2-4570-962f-fcbf9f3d2577
Nov 28 18:20:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:04.900 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[774dfbee-1380-454d-8110-344e8fe65633]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 28 18:20:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:04.936 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[175fa396-e0da-4107-b0d0-41309fcdb5f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 28 18:20:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:04.939 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[b559d2e8-02f8-4400-b937-03015c47f15c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 28 18:20:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:04.973 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[b886d5b1-cd7e-469d-a25d-9283a43025a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 28 18:20:04 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:04.990 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[090e8b5a-ecdf-4d35-a8df-5142010e9c2b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16e2cef3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:52:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508663, 'reachable_time': 44292, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250198, 'error': None, 'target': 'ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:05 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:05.009 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[e39b7917-da7d-4477-a577-8e8ac73be4f6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap16e2cef3-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508674, 'tstamp': 508674}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250199, 'error': None, 'target': 'ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap16e2cef3-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508677, 'tstamp': 508677}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250199, 'error': None, 'target': 'ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:05 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:05.011 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16e2cef3-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 28 18:20:05 compute-0 nova_compute[189296]: 2025-11-28 18:20:05.013 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:20:05 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:05.016 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16e2cef3-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 28 18:20:05 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:05.017 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 28 18:20:05 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:05.017 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16e2cef3-e0, col_values=(('external_ids', {'iface-id': 'fadccca5-e309-4390-a64b-6711ee103450'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 28 18:20:05 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:05.017 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 28 18:20:05 compute-0 nova_compute[189296]: 2025-11-28 18:20:05.448 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764354005.4474196, 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 28 18:20:05 compute-0 nova_compute[189296]: 2025-11-28 18:20:05.448 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] VM Started (Lifecycle Event)
Nov 28 18:20:05 compute-0 nova_compute[189296]: 2025-11-28 18:20:05.483 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 28 18:20:05 compute-0 nova_compute[189296]: 2025-11-28 18:20:05.489 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764354005.4475389, 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 28 18:20:05 compute-0 nova_compute[189296]: 2025-11-28 18:20:05.489 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] VM Paused (Lifecycle Event)
Nov 28 18:20:05 compute-0 nova_compute[189296]: 2025-11-28 18:20:05.511 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 28 18:20:05 compute-0 nova_compute[189296]: 2025-11-28 18:20:05.516 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 28 18:20:05 compute-0 nova_compute[189296]: 2025-11-28 18:20:05.542 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 28 18:20:06 compute-0 nova_compute[189296]: 2025-11-28 18:20:06.036 189300 DEBUG nova.compute.manager [req-16324edf-144e-4f4c-9088-a6b9cf140f97 req-7b8a0912-28ae-4380-aabf-e87ab4c85ae5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Received event network-vif-plugged-e58535aa-0624-4101-bd81-7c3c483d4ac7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 28 18:20:06 compute-0 nova_compute[189296]: 2025-11-28 18:20:06.037 189300 DEBUG oslo_concurrency.lockutils [req-16324edf-144e-4f4c-9088-a6b9cf140f97 req-7b8a0912-28ae-4380-aabf-e87ab4c85ae5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "5e570bcf-69d9-41f4-b621-d75ff7b1bd6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:20:06 compute-0 nova_compute[189296]: 2025-11-28 18:20:06.038 189300 DEBUG oslo_concurrency.lockutils [req-16324edf-144e-4f4c-9088-a6b9cf140f97 req-7b8a0912-28ae-4380-aabf-e87ab4c85ae5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "5e570bcf-69d9-41f4-b621-d75ff7b1bd6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:20:06 compute-0 nova_compute[189296]: 2025-11-28 18:20:06.038 189300 DEBUG oslo_concurrency.lockutils [req-16324edf-144e-4f4c-9088-a6b9cf140f97 req-7b8a0912-28ae-4380-aabf-e87ab4c85ae5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "5e570bcf-69d9-41f4-b621-d75ff7b1bd6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:20:06 compute-0 nova_compute[189296]: 2025-11-28 18:20:06.039 189300 DEBUG nova.compute.manager [req-16324edf-144e-4f4c-9088-a6b9cf140f97 req-7b8a0912-28ae-4380-aabf-e87ab4c85ae5 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Processing event network-vif-plugged-e58535aa-0624-4101-bd81-7c3c483d4ac7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 28 18:20:06 compute-0 nova_compute[189296]: 2025-11-28 18:20:06.040 189300 DEBUG nova.compute.manager [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 28 18:20:06 compute-0 nova_compute[189296]: 2025-11-28 18:20:06.044 189300 DEBUG nova.virt.libvirt.driver [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 28 18:20:06 compute-0 nova_compute[189296]: 2025-11-28 18:20:06.045 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764354006.0445936, 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 28 18:20:06 compute-0 nova_compute[189296]: 2025-11-28 18:20:06.046 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] VM Resumed (Lifecycle Event)
Nov 28 18:20:06 compute-0 nova_compute[189296]: 2025-11-28 18:20:06.051 189300 INFO nova.virt.libvirt.driver [-] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Instance spawned successfully.
Nov 28 18:20:06 compute-0 nova_compute[189296]: 2025-11-28 18:20:06.051 189300 DEBUG nova.virt.libvirt.driver [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 28 18:20:06 compute-0 nova_compute[189296]: 2025-11-28 18:20:06.065 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 28 18:20:06 compute-0 nova_compute[189296]: 2025-11-28 18:20:06.076 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 28 18:20:06 compute-0 nova_compute[189296]: 2025-11-28 18:20:06.082 189300 DEBUG nova.virt.libvirt.driver [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 28 18:20:06 compute-0 nova_compute[189296]: 2025-11-28 18:20:06.083 189300 DEBUG nova.virt.libvirt.driver [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 28 18:20:06 compute-0 nova_compute[189296]: 2025-11-28 18:20:06.083 189300 DEBUG nova.virt.libvirt.driver [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 28 18:20:06 compute-0 nova_compute[189296]: 2025-11-28 18:20:06.084 189300 DEBUG nova.virt.libvirt.driver [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 28 18:20:06 compute-0 nova_compute[189296]: 2025-11-28 18:20:06.084 189300 DEBUG nova.virt.libvirt.driver [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 28 18:20:06 compute-0 nova_compute[189296]: 2025-11-28 18:20:06.085 189300 DEBUG nova.virt.libvirt.driver [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 28 18:20:06 compute-0 nova_compute[189296]: 2025-11-28 18:20:06.125 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 28 18:20:06 compute-0 nova_compute[189296]: 2025-11-28 18:20:06.171 189300 INFO nova.compute.manager [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Took 7.70 seconds to spawn the instance on the hypervisor.#033[00m
Nov 28 18:20:06 compute-0 nova_compute[189296]: 2025-11-28 18:20:06.172 189300 DEBUG nova.compute.manager [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:20:06 compute-0 nova_compute[189296]: 2025-11-28 18:20:06.261 189300 INFO nova.compute.manager [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Took 8.64 seconds to build instance.#033[00m
Nov 28 18:20:06 compute-0 nova_compute[189296]: 2025-11-28 18:20:06.293 189300 DEBUG oslo_concurrency.lockutils [None req-874c6c6b-03ea-458e-a4d3-2d49f721527e 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "5e570bcf-69d9-41f4-b621-d75ff7b1bd6c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:20:07 compute-0 nova_compute[189296]: 2025-11-28 18:20:07.011 189300 DEBUG nova.network.neutron [req-d1f6644c-db29-4b94-9ece-8926d89b20a5 req-c8187c34-b1a1-43ba-af14-75c7dbeda72d 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Updated VIF entry in instance network info cache for port e58535aa-0624-4101-bd81-7c3c483d4ac7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 28 18:20:07 compute-0 nova_compute[189296]: 2025-11-28 18:20:07.012 189300 DEBUG nova.network.neutron [req-d1f6644c-db29-4b94-9ece-8926d89b20a5 req-c8187c34-b1a1-43ba-af14-75c7dbeda72d 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Updating instance_info_cache with network_info: [{"id": "e58535aa-0624-4101-bd81-7c3c483d4ac7", "address": "fa:16:3e:39:25:e6", "network": {"id": "16e2cef3-e4a2-4570-962f-fcbf9f3d2577", "bridge": "br-int", "label": "tempest-network-smoke--630554822", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c41bbf2b30ca428fbd489c3dc29e8045", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape58535aa-06", "ovs_interfaceid": "e58535aa-0624-4101-bd81-7c3c483d4ac7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:20:07 compute-0 nova_compute[189296]: 2025-11-28 18:20:07.026 189300 DEBUG oslo_concurrency.lockutils [req-d1f6644c-db29-4b94-9ece-8926d89b20a5 req-c8187c34-b1a1-43ba-af14-75c7dbeda72d 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-5e570bcf-69d9-41f4-b621-d75ff7b1bd6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:20:08 compute-0 nova_compute[189296]: 2025-11-28 18:20:08.444 189300 DEBUG nova.compute.manager [req-6ff06282-a6b0-407d-ac1e-907628855158 req-4c8367cb-2cad-4b55-8a09-57e9da670ca0 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Received event network-vif-plugged-e58535aa-0624-4101-bd81-7c3c483d4ac7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:20:08 compute-0 nova_compute[189296]: 2025-11-28 18:20:08.444 189300 DEBUG oslo_concurrency.lockutils [req-6ff06282-a6b0-407d-ac1e-907628855158 req-4c8367cb-2cad-4b55-8a09-57e9da670ca0 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "5e570bcf-69d9-41f4-b621-d75ff7b1bd6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:20:08 compute-0 nova_compute[189296]: 2025-11-28 18:20:08.445 189300 DEBUG oslo_concurrency.lockutils [req-6ff06282-a6b0-407d-ac1e-907628855158 req-4c8367cb-2cad-4b55-8a09-57e9da670ca0 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "5e570bcf-69d9-41f4-b621-d75ff7b1bd6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:20:08 compute-0 nova_compute[189296]: 2025-11-28 18:20:08.446 189300 DEBUG oslo_concurrency.lockutils [req-6ff06282-a6b0-407d-ac1e-907628855158 req-4c8367cb-2cad-4b55-8a09-57e9da670ca0 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "5e570bcf-69d9-41f4-b621-d75ff7b1bd6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:20:08 compute-0 nova_compute[189296]: 2025-11-28 18:20:08.447 189300 DEBUG nova.compute.manager [req-6ff06282-a6b0-407d-ac1e-907628855158 req-4c8367cb-2cad-4b55-8a09-57e9da670ca0 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] No waiting events found dispatching network-vif-plugged-e58535aa-0624-4101-bd81-7c3c483d4ac7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:20:08 compute-0 nova_compute[189296]: 2025-11-28 18:20:08.447 189300 WARNING nova.compute.manager [req-6ff06282-a6b0-407d-ac1e-907628855158 req-4c8367cb-2cad-4b55-8a09-57e9da670ca0 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Received unexpected event network-vif-plugged-e58535aa-0624-4101-bd81-7c3c483d4ac7 for instance with vm_state active and task_state None.#033[00m
Nov 28 18:20:08 compute-0 nova_compute[189296]: 2025-11-28 18:20:08.645 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:08 compute-0 nova_compute[189296]: 2025-11-28 18:20:08.852 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:10 compute-0 nova_compute[189296]: 2025-11-28 18:20:10.226 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:11 compute-0 nova_compute[189296]: 2025-11-28 18:20:11.584 189300 DEBUG nova.compute.manager [req-240a3adb-04fe-4cc4-9e03-38d1f39079de req-1c98f68b-2dc7-4185-9938-b6f08c1e55d0 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Received event network-changed-e58535aa-0624-4101-bd81-7c3c483d4ac7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:20:11 compute-0 nova_compute[189296]: 2025-11-28 18:20:11.585 189300 DEBUG nova.compute.manager [req-240a3adb-04fe-4cc4-9e03-38d1f39079de req-1c98f68b-2dc7-4185-9938-b6f08c1e55d0 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Refreshing instance network info cache due to event network-changed-e58535aa-0624-4101-bd81-7c3c483d4ac7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 28 18:20:11 compute-0 nova_compute[189296]: 2025-11-28 18:20:11.586 189300 DEBUG oslo_concurrency.lockutils [req-240a3adb-04fe-4cc4-9e03-38d1f39079de req-1c98f68b-2dc7-4185-9938-b6f08c1e55d0 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-5e570bcf-69d9-41f4-b621-d75ff7b1bd6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:20:11 compute-0 nova_compute[189296]: 2025-11-28 18:20:11.586 189300 DEBUG oslo_concurrency.lockutils [req-240a3adb-04fe-4cc4-9e03-38d1f39079de req-1c98f68b-2dc7-4185-9938-b6f08c1e55d0 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-5e570bcf-69d9-41f4-b621-d75ff7b1bd6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:20:11 compute-0 nova_compute[189296]: 2025-11-28 18:20:11.587 189300 DEBUG nova.network.neutron [req-240a3adb-04fe-4cc4-9e03-38d1f39079de req-1c98f68b-2dc7-4185-9938-b6f08c1e55d0 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Refreshing network info cache for port e58535aa-0624-4101-bd81-7c3c483d4ac7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 18:20:13 compute-0 nova_compute[189296]: 2025-11-28 18:20:13.647 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:13 compute-0 nova_compute[189296]: 2025-11-28 18:20:13.854 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:14 compute-0 nova_compute[189296]: 2025-11-28 18:20:14.468 189300 DEBUG nova.network.neutron [req-240a3adb-04fe-4cc4-9e03-38d1f39079de req-1c98f68b-2dc7-4185-9938-b6f08c1e55d0 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Updated VIF entry in instance network info cache for port e58535aa-0624-4101-bd81-7c3c483d4ac7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 28 18:20:14 compute-0 nova_compute[189296]: 2025-11-28 18:20:14.469 189300 DEBUG nova.network.neutron [req-240a3adb-04fe-4cc4-9e03-38d1f39079de req-1c98f68b-2dc7-4185-9938-b6f08c1e55d0 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Updating instance_info_cache with network_info: [{"id": "e58535aa-0624-4101-bd81-7c3c483d4ac7", "address": "fa:16:3e:39:25:e6", "network": {"id": "16e2cef3-e4a2-4570-962f-fcbf9f3d2577", "bridge": "br-int", "label": "tempest-network-smoke--630554822", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c41bbf2b30ca428fbd489c3dc29e8045", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape58535aa-06", "ovs_interfaceid": "e58535aa-0624-4101-bd81-7c3c483d4ac7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:20:14 compute-0 nova_compute[189296]: 2025-11-28 18:20:14.499 189300 DEBUG oslo_concurrency.lockutils [req-240a3adb-04fe-4cc4-9e03-38d1f39079de req-1c98f68b-2dc7-4185-9938-b6f08c1e55d0 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-5e570bcf-69d9-41f4-b621-d75ff7b1bd6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:20:14 compute-0 podman[250207]: 2025-11-28 18:20:14.811534657 +0000 UTC m=+0.112445003 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, name=ubi9-minimal, version=9.6, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Nov 28 18:20:14 compute-0 podman[250208]: 2025-11-28 18:20:14.81650532 +0000 UTC m=+0.123443105 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, 
container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=f26160204c78771e78cdd2489258319b)
Nov 28 18:20:14 compute-0 podman[250209]: 2025-11-28 18:20:14.827555463 +0000 UTC m=+0.123261831 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 28 18:20:15 compute-0 nova_compute[189296]: 2025-11-28 18:20:15.623 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:20:17 compute-0 nova_compute[189296]: 2025-11-28 18:20:17.202 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:17 compute-0 nova_compute[189296]: 2025-11-28 18:20:17.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:20:17 compute-0 nova_compute[189296]: 2025-11-28 18:20:17.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:20:17 compute-0 nova_compute[189296]: 2025-11-28 18:20:17.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 18:20:17 compute-0 nova_compute[189296]: 2025-11-28 18:20:17.967 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-0af9c8e6-8030-462a-9dfd-d52f041685f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:20:17 compute-0 nova_compute[189296]: 2025-11-28 18:20:17.968 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-0af9c8e6-8030-462a-9dfd-d52f041685f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:20:17 compute-0 nova_compute[189296]: 2025-11-28 18:20:17.968 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:20:17 compute-0 nova_compute[189296]: 2025-11-28 18:20:17.969 189300 DEBUG nova.objects.instance [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0af9c8e6-8030-462a-9dfd-d52f041685f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:20:18 compute-0 nova_compute[189296]: 2025-11-28 18:20:18.651 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:18 compute-0 nova_compute[189296]: 2025-11-28 18:20:18.857 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:20 compute-0 nova_compute[189296]: 2025-11-28 18:20:20.751 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Updating instance_info_cache with network_info: [{"id": "7a69f46e-77c5-4129-9783-254170a7422b", "address": "fa:16:3e:45:0d:59", "network": {"id": "16e2cef3-e4a2-4570-962f-fcbf9f3d2577", "bridge": "br-int", "label": "tempest-network-smoke--630554822", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c41bbf2b30ca428fbd489c3dc29e8045", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a69f46e-77", "ovs_interfaceid": "7a69f46e-77c5-4129-9783-254170a7422b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:20:20 compute-0 nova_compute[189296]: 2025-11-28 18:20:20.769 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-0af9c8e6-8030-462a-9dfd-d52f041685f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:20:20 compute-0 nova_compute[189296]: 2025-11-28 18:20:20.769 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:20:20 compute-0 nova_compute[189296]: 2025-11-28 18:20:20.770 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:20:20 compute-0 nova_compute[189296]: 2025-11-28 18:20:20.771 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:20:20 compute-0 nova_compute[189296]: 2025-11-28 18:20:20.771 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:20:20 compute-0 nova_compute[189296]: 2025-11-28 18:20:20.771 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:20:21 compute-0 podman[250263]: 2025-11-28 18:20:21.029212897 +0000 UTC m=+0.081997643 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 28 18:20:21 compute-0 podman[250276]: 2025-11-28 18:20:21.049419515 +0000 UTC m=+0.084410062 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, 
org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Nov 28 18:20:21 compute-0 podman[250265]: 2025-11-28 18:20:21.054902401 +0000 UTC m=+0.099540946 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, config_id=edpm, maintainer=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, container_name=kepler, distribution-scope=public, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9)
Nov 28 18:20:21 compute-0 podman[250264]: 2025-11-28 18:20:21.060042378 +0000 UTC m=+0.108616790 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 28 18:20:22 compute-0 nova_compute[189296]: 2025-11-28 18:20:22.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:20:23 compute-0 nova_compute[189296]: 2025-11-28 18:20:23.014 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:23 compute-0 nova_compute[189296]: 2025-11-28 18:20:23.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:20:23 compute-0 nova_compute[189296]: 2025-11-28 18:20:23.652 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:23 compute-0 nova_compute[189296]: 2025-11-28 18:20:23.667 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:20:23 compute-0 nova_compute[189296]: 2025-11-28 18:20:23.668 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:20:23 compute-0 nova_compute[189296]: 2025-11-28 18:20:23.668 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:20:23 compute-0 nova_compute[189296]: 2025-11-28 18:20:23.669 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:20:23 compute-0 nova_compute[189296]: 2025-11-28 18:20:23.779 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5e570bcf-69d9-41f4-b621-d75ff7b1bd6c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:20:23 compute-0 nova_compute[189296]: 2025-11-28 18:20:23.844 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5e570bcf-69d9-41f4-b621-d75ff7b1bd6c/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:20:23 compute-0 nova_compute[189296]: 2025-11-28 18:20:23.845 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5e570bcf-69d9-41f4-b621-d75ff7b1bd6c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:20:23 compute-0 nova_compute[189296]: 2025-11-28 18:20:23.861 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:23 compute-0 nova_compute[189296]: 2025-11-28 18:20:23.906 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5e570bcf-69d9-41f4-b621-d75ff7b1bd6c/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:20:23 compute-0 nova_compute[189296]: 2025-11-28 18:20:23.917 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0af9c8e6-8030-462a-9dfd-d52f041685f5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:20:23 compute-0 nova_compute[189296]: 2025-11-28 18:20:23.983 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0af9c8e6-8030-462a-9dfd-d52f041685f5/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:20:23 compute-0 nova_compute[189296]: 2025-11-28 18:20:23.984 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0af9c8e6-8030-462a-9dfd-d52f041685f5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:20:24 compute-0 nova_compute[189296]: 2025-11-28 18:20:24.044 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0af9c8e6-8030-462a-9dfd-d52f041685f5/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:20:24 compute-0 nova_compute[189296]: 2025-11-28 18:20:24.052 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:20:24 compute-0 nova_compute[189296]: 2025-11-28 18:20:24.113 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:20:24 compute-0 nova_compute[189296]: 2025-11-28 18:20:24.114 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:20:24 compute-0 nova_compute[189296]: 2025-11-28 18:20:24.175 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:20:24 compute-0 nova_compute[189296]: 2025-11-28 18:20:24.595 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:20:24 compute-0 nova_compute[189296]: 2025-11-28 18:20:24.598 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4906MB free_disk=72.2829818725586GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:20:24 compute-0 nova_compute[189296]: 2025-11-28 18:20:24.599 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:20:24 compute-0 nova_compute[189296]: 2025-11-28 18:20:24.600 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:20:24 compute-0 nova_compute[189296]: 2025-11-28 18:20:24.705 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 0af9c8e6-8030-462a-9dfd-d52f041685f5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:20:24 compute-0 nova_compute[189296]: 2025-11-28 18:20:24.706 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 38dd3ba8-0751-41a0-b83f-b49dc0b192c6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:20:24 compute-0 nova_compute[189296]: 2025-11-28 18:20:24.706 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:20:24 compute-0 nova_compute[189296]: 2025-11-28 18:20:24.707 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:20:24 compute-0 nova_compute[189296]: 2025-11-28 18:20:24.707 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:20:24 compute-0 nova_compute[189296]: 2025-11-28 18:20:24.811 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:20:24 compute-0 nova_compute[189296]: 2025-11-28 18:20:24.839 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:20:24 compute-0 nova_compute[189296]: 2025-11-28 18:20:24.870 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:20:24 compute-0 nova_compute[189296]: 2025-11-28 18:20:24.871 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.271s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:20:25 compute-0 podman[250357]: 2025-11-28 18:20:25.078840769 +0000 UTC m=+0.140338362 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 28 18:20:26 compute-0 nova_compute[189296]: 2025-11-28 18:20:26.241 189300 DEBUG oslo_concurrency.lockutils [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Acquiring lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:20:26 compute-0 nova_compute[189296]: 2025-11-28 18:20:26.242 189300 DEBUG oslo_concurrency.lockutils [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:20:26 compute-0 nova_compute[189296]: 2025-11-28 18:20:26.242 189300 INFO nova.compute.manager [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Rebooting instance#033[00m
Nov 28 18:20:26 compute-0 nova_compute[189296]: 2025-11-28 18:20:26.260 189300 DEBUG oslo_concurrency.lockutils [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Acquiring lock "refresh_cache-38dd3ba8-0751-41a0-b83f-b49dc0b192c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:20:26 compute-0 nova_compute[189296]: 2025-11-28 18:20:26.260 189300 DEBUG oslo_concurrency.lockutils [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Acquired lock "refresh_cache-38dd3ba8-0751-41a0-b83f-b49dc0b192c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:20:26 compute-0 nova_compute[189296]: 2025-11-28 18:20:26.261 189300 DEBUG nova.network.neutron [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 28 18:20:27 compute-0 nova_compute[189296]: 2025-11-28 18:20:27.871 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:20:27 compute-0 nova_compute[189296]: 2025-11-28 18:20:27.871 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:20:28 compute-0 nova_compute[189296]: 2025-11-28 18:20:28.655 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:28 compute-0 nova_compute[189296]: 2025-11-28 18:20:28.865 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:29 compute-0 podman[203494]: time="2025-11-28T18:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:20:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30755 "" "Go-http-client/1.1"
Nov 28 18:20:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5245 "" "Go-http-client/1.1"
Nov 28 18:20:30 compute-0 nova_compute[189296]: 2025-11-28 18:20:30.620 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:20:31 compute-0 openstack_network_exporter[205632]: ERROR   18:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:20:31 compute-0 openstack_network_exporter[205632]: ERROR   18:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:20:31 compute-0 openstack_network_exporter[205632]: ERROR   18:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:20:31 compute-0 openstack_network_exporter[205632]: ERROR   18:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:20:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:20:31 compute-0 openstack_network_exporter[205632]: ERROR   18:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:20:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:20:33 compute-0 nova_compute[189296]: 2025-11-28 18:20:33.517 189300 DEBUG nova.network.neutron [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Updating instance_info_cache with network_info: [{"id": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "address": "fa:16:3e:ad:e5:da", "network": {"id": "cecb017f-4e6e-4722-8798-5d73232e6fbd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1305466028-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ebd016d88464c67abefec4da518674a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dd54f15-04", "ovs_interfaceid": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:20:33 compute-0 nova_compute[189296]: 2025-11-28 18:20:33.586 189300 DEBUG oslo_concurrency.lockutils [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Releasing lock "refresh_cache-38dd3ba8-0751-41a0-b83f-b49dc0b192c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:20:33 compute-0 nova_compute[189296]: 2025-11-28 18:20:33.587 189300 DEBUG nova.compute.manager [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:20:33 compute-0 nova_compute[189296]: 2025-11-28 18:20:33.657 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:33 compute-0 kernel: tap9dd54f15-04 (unregistering): left promiscuous mode
Nov 28 18:20:33 compute-0 NetworkManager[56307]: <info>  [1764354033.7388] device (tap9dd54f15-04): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 28 18:20:33 compute-0 nova_compute[189296]: 2025-11-28 18:20:33.757 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:33 compute-0 ovn_controller[97771]: 2025-11-28T18:20:33Z|00134|binding|INFO|Releasing lport 9dd54f15-0412-4387-bc8f-07d1b4702dbb from this chassis (sb_readonly=0)
Nov 28 18:20:33 compute-0 ovn_controller[97771]: 2025-11-28T18:20:33Z|00135|binding|INFO|Setting lport 9dd54f15-0412-4387-bc8f-07d1b4702dbb down in Southbound
Nov 28 18:20:33 compute-0 ovn_controller[97771]: 2025-11-28T18:20:33Z|00136|binding|INFO|Removing iface tap9dd54f15-04 ovn-installed in OVS
Nov 28 18:20:33 compute-0 nova_compute[189296]: 2025-11-28 18:20:33.763 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:33 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:33.772 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:e5:da 10.100.0.8'], port_security=['fa:16:3e:ad:e5:da 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '38dd3ba8-0751-41a0-b83f-b49dc0b192c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cecb017f-4e6e-4722-8798-5d73232e6fbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ebd016d88464c67abefec4da518674a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '54c85ea7-0279-4254-b89c-237ccce3cf9e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.217'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e84ddcd7-545a-4e48-a6ce-b80b286b2303, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=9dd54f15-0412-4387-bc8f-07d1b4702dbb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:20:33 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:33.773 106624 INFO neutron.agent.ovn.metadata.agent [-] Port 9dd54f15-0412-4387-bc8f-07d1b4702dbb in datapath cecb017f-4e6e-4722-8798-5d73232e6fbd unbound from our chassis
Nov 28 18:20:33 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:33.776 106624 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cecb017f-4e6e-4722-8798-5d73232e6fbd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 28 18:20:33 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:33.778 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[940e232e-c5b5-48c7-afb9-9992efaea906]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 28 18:20:33 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:33.778 106624 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd namespace which is not needed anymore
Nov 28 18:20:33 compute-0 nova_compute[189296]: 2025-11-28 18:20:33.783 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:20:33 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 28 18:20:33 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 41.502s CPU time.
Nov 28 18:20:33 compute-0 systemd-machined[155703]: Machine qemu-12-instance-0000000c terminated.
Nov 28 18:20:33 compute-0 podman[250384]: 2025-11-28 18:20:33.835614737 +0000 UTC m=+0.078708102 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:20:33 compute-0 nova_compute[189296]: 2025-11-28 18:20:33.868 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:20:33 compute-0 neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd[249775]: [NOTICE]   (249779) : haproxy version is 2.8.14-c23fe91
Nov 28 18:20:33 compute-0 neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd[249775]: [NOTICE]   (249779) : path to executable is /usr/sbin/haproxy
Nov 28 18:20:33 compute-0 neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd[249775]: [WARNING]  (249779) : Exiting Master process...
Nov 28 18:20:33 compute-0 neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd[249775]: [ALERT]    (249779) : Current worker (249781) exited with code 143 (Terminated)
Nov 28 18:20:33 compute-0 neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd[249775]: [WARNING]  (249779) : All workers exited. Exiting... (0)
Nov 28 18:20:33 compute-0 systemd[1]: libpod-740fa0af16268967b0e366ba1fca6ea2a8dd0d8e7eb4d63f04e18299969ded54.scope: Deactivated successfully.
Nov 28 18:20:33 compute-0 podman[250429]: 2025-11-28 18:20:33.957264818 +0000 UTC m=+0.067952887 container died 740fa0af16268967b0e366ba1fca6ea2a8dd0d8e7eb4d63f04e18299969ded54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:20:33 compute-0 nova_compute[189296]: 2025-11-28 18:20:33.965 189300 INFO nova.virt.libvirt.driver [-] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Instance destroyed successfully.
Nov 28 18:20:33 compute-0 nova_compute[189296]: 2025-11-28 18:20:33.965 189300 DEBUG nova.objects.instance [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lazy-loading 'resources' on Instance uuid 38dd3ba8-0751-41a0-b83f-b49dc0b192c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 28 18:20:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-740fa0af16268967b0e366ba1fca6ea2a8dd0d8e7eb4d63f04e18299969ded54-userdata-shm.mount: Deactivated successfully.
Nov 28 18:20:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-e00d03a79495d12e13cf428fdecc084acaa7ed8858ecfe91ac8e4fcd68668ad6-merged.mount: Deactivated successfully.
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.002 189300 DEBUG nova.virt.libvirt.vif [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-28T18:19:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-120148377',display_name='tempest-ServerActionsTestJSON-server-120148377',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-120148377',id=12,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDNKDhkiMtsztQmvM2gRYqVRTHcsj/9P9Cg/+MCIxNFg5QbGBxNz8mS/LylMSt0qq29jzqRfKycq5Qi4LzakhV4vYbtYARzjXolBVflKv2a5LVTztOBqSNR1wZxrvf10hw==',key_name='tempest-keypair-957693611',keypairs=<?>,launch_index=0,launched_at=2025-11-28T18:19:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ebd016d88464c67abefec4da518674a',ramdisk_id='',reservation_id='r-jl0w8ww4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1827601863',owner_user_name='tempest-ServerActionsTestJSON-1827601863-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-28T18:20:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='44a8645b16fc4d99820df9d0c6154195',uuid=38dd3ba8-0751-41a0-b83f-b49dc0b192c6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "address": "fa:16:3e:ad:e5:da", "network": {"id": "cecb017f-4e6e-4722-8798-5d73232e6fbd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1305466028-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ebd016d88464c67abefec4da518674a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dd54f15-04", "ovs_interfaceid": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.002 189300 DEBUG nova.network.os_vif_util [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Converting VIF {"id": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "address": "fa:16:3e:ad:e5:da", "network": {"id": "cecb017f-4e6e-4722-8798-5d73232e6fbd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1305466028-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ebd016d88464c67abefec4da518674a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dd54f15-04", "ovs_interfaceid": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.003 189300 DEBUG nova.network.os_vif_util [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ad:e5:da,bridge_name='br-int',has_traffic_filtering=True,id=9dd54f15-0412-4387-bc8f-07d1b4702dbb,network=Network(cecb017f-4e6e-4722-8798-5d73232e6fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dd54f15-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.003 189300 DEBUG os_vif [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ad:e5:da,bridge_name='br-int',has_traffic_filtering=True,id=9dd54f15-0412-4387-bc8f-07d1b4702dbb,network=Network(cecb017f-4e6e-4722-8798-5d73232e6fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dd54f15-04') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.004 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.005 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9dd54f15-04, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 28 18:20:34 compute-0 podman[250429]: 2025-11-28 18:20:34.008460801 +0000 UTC m=+0.119148870 container cleanup 740fa0af16268967b0e366ba1fca6ea2a8dd0d8e7eb4d63f04e18299969ded54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.008 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.010 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.012 189300 INFO os_vif [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ad:e5:da,bridge_name='br-int',has_traffic_filtering=True,id=9dd54f15-0412-4387-bc8f-07d1b4702dbb,network=Network(cecb017f-4e6e-4722-8798-5d73232e6fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dd54f15-04')
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.022 189300 DEBUG nova.virt.libvirt.driver [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Start _get_guest_xml network_info=[{"id": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "address": "fa:16:3e:ad:e5:da", "network": {"id": "cecb017f-4e6e-4722-8798-5d73232e6fbd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1305466028-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ebd016d88464c67abefec4da518674a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dd54f15-04", "ovs_interfaceid": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=ffec9e61-65fb-46ae-8d34-338639229ec3,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'image_id': 'ffec9e61-65fb-46ae-8d34-338639229ec3'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 28 18:20:34 compute-0 systemd[1]: libpod-conmon-740fa0af16268967b0e366ba1fca6ea2a8dd0d8e7eb4d63f04e18299969ded54.scope: Deactivated successfully.
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.031 189300 DEBUG nova.compute.manager [req-22ca993d-ed55-4a75-904a-69c35558535f req-921f0a8f-b3b5-41ef-872d-8d35a2fc4dfa 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Received event network-vif-unplugged-9dd54f15-0412-4387-bc8f-07d1b4702dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.032 189300 DEBUG oslo_concurrency.lockutils [req-22ca993d-ed55-4a75-904a-69c35558535f req-921f0a8f-b3b5-41ef-872d-8d35a2fc4dfa 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.032 189300 DEBUG oslo_concurrency.lockutils [req-22ca993d-ed55-4a75-904a-69c35558535f req-921f0a8f-b3b5-41ef-872d-8d35a2fc4dfa 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.032 189300 DEBUG oslo_concurrency.lockutils [req-22ca993d-ed55-4a75-904a-69c35558535f req-921f0a8f-b3b5-41ef-872d-8d35a2fc4dfa 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.032 189300 DEBUG nova.compute.manager [req-22ca993d-ed55-4a75-904a-69c35558535f req-921f0a8f-b3b5-41ef-872d-8d35a2fc4dfa 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] No waiting events found dispatching network-vif-unplugged-9dd54f15-0412-4387-bc8f-07d1b4702dbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.033 189300 WARNING nova.compute.manager [req-22ca993d-ed55-4a75-904a-69c35558535f req-921f0a8f-b3b5-41ef-872d-8d35a2fc4dfa 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Received unexpected event network-vif-unplugged-9dd54f15-0412-4387-bc8f-07d1b4702dbb for instance with vm_state active and task_state reboot_started_hard.
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.034 189300 WARNING nova.virt.libvirt.driver [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.043 189300 DEBUG nova.virt.libvirt.host [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.044 189300 DEBUG nova.virt.libvirt.host [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.048 189300 DEBUG nova.virt.libvirt.host [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.048 189300 DEBUG nova.virt.libvirt.host [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.048 189300 DEBUG nova.virt.libvirt.driver [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.049 189300 DEBUG nova.virt.hardware [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-28T18:16:37Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b177f611-8f79-4bfd-9a12-e83e9545757b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=ffec9e61-65fb-46ae-8d34-338639229ec3,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.049 189300 DEBUG nova.virt.hardware [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.049 189300 DEBUG nova.virt.hardware [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.049 189300 DEBUG nova.virt.hardware [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.050 189300 DEBUG nova.virt.hardware [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.050 189300 DEBUG nova.virt.hardware [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.050 189300 DEBUG nova.virt.hardware [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.050 189300 DEBUG nova.virt.hardware [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.050 189300 DEBUG nova.virt.hardware [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.050 189300 DEBUG nova.virt.hardware [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.051 189300 DEBUG nova.virt.hardware [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.051 189300 DEBUG nova.objects.instance [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 38dd3ba8-0751-41a0-b83f-b49dc0b192c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.091 189300 DEBUG oslo_concurrency.processutils [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.config --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:20:34 compute-0 podman[250474]: 2025-11-28 18:20:34.100038539 +0000 UTC m=+0.054597098 container remove 740fa0af16268967b0e366ba1fca6ea2a8dd0d8e7eb4d63f04e18299969ded54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.108 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[a79dd6ea-cd53-444f-ab2c-4abb18e2e298]: (4, ('Fri Nov 28 06:20:33 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd (740fa0af16268967b0e366ba1fca6ea2a8dd0d8e7eb4d63f04e18299969ded54)\n740fa0af16268967b0e366ba1fca6ea2a8dd0d8e7eb4d63f04e18299969ded54\nFri Nov 28 06:20:34 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd (740fa0af16268967b0e366ba1fca6ea2a8dd0d8e7eb4d63f04e18299969ded54)\n740fa0af16268967b0e366ba1fca6ea2a8dd0d8e7eb4d63f04e18299969ded54\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.110 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[49619d6f-0581-49ed-903d-1525aa2d8341]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.111 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcecb017f-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.114 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:20:34 compute-0 kernel: tapcecb017f-40: left promiscuous mode
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.118 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.131 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[b88fe2cd-7330-45ab-b749-4224a8786716]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.133 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.152 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[895ae1d6-850e-448e-8d5a-af6d2c3073d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.153 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[c55f0889-8336-4e6b-93ee-c6400ebbb21a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.170 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[893bd00e-a0cc-4391-848e-fcab33f79263]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510044, 'reachable_time': 18383, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250490, 'error': None, 'target': 'ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.171 189300 DEBUG oslo_concurrency.processutils [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.config --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.171 189300 DEBUG oslo_concurrency.lockutils [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Acquiring lock "/var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.172 189300 DEBUG oslo_concurrency.lockutils [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lock "/var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.172 189300 DEBUG oslo_concurrency.lockutils [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lock "/var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.173 106734 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.173 106734 DEBUG oslo.privsep.daemon [-] privsep: reply[ecccde34-8e1a-45dc-881b-62e27de094de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:34 compute-0 systemd[1]: run-netns-ovnmeta\x2dcecb017f\x2d4e6e\x2d4722\x2d8798\x2d5d73232e6fbd.mount: Deactivated successfully.
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.176 189300 DEBUG nova.virt.libvirt.vif [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-28T18:19:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-120148377',display_name='tempest-ServerActionsTestJSON-server-120148377',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-120148377',id=12,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDNKDhkiMtsztQmvM2gRYqVRTHcsj/9P9Cg/+MCIxNFg5QbGBxNz8mS/LylMSt0qq29jzqRfKycq5Qi4LzakhV4vYbtYARzjXolBVflKv2a5LVTztOBqSNR1wZxrvf10hw==',key_name='tempest-keypair-957693611',keypairs=<?>,launch_index=0,launched_at=2025-11-28T18:19:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ebd016d88464c67abefec4da518674a',ramdisk_id='',reservation_id='r-jl0w8ww4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1827601863',owner_user_name='tempest-ServerActionsTestJSON-1827601863-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-28T18:20:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='44a8645b16fc4d99820df9d0c6154195',uuid=38dd3ba8-0751-41a0-b83f-b49dc0b192c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "address": "fa:16:3e:ad:e5:da", "network": {"id": "cecb017f-4e6e-4722-8798-5d73232e6fbd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1305466028-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ebd016d88464c67abefec4da518674a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dd54f15-04", "ovs_interfaceid": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.177 189300 DEBUG nova.network.os_vif_util [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Converting VIF {"id": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "address": "fa:16:3e:ad:e5:da", "network": {"id": "cecb017f-4e6e-4722-8798-5d73232e6fbd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1305466028-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ebd016d88464c67abefec4da518674a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dd54f15-04", "ovs_interfaceid": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.177 189300 DEBUG nova.network.os_vif_util [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ad:e5:da,bridge_name='br-int',has_traffic_filtering=True,id=9dd54f15-0412-4387-bc8f-07d1b4702dbb,network=Network(cecb017f-4e6e-4722-8798-5d73232e6fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dd54f15-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.178 189300 DEBUG nova.objects.instance [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lazy-loading 'pci_devices' on Instance uuid 38dd3ba8-0751-41a0-b83f-b49dc0b192c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.202 189300 DEBUG nova.virt.libvirt.driver [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] End _get_guest_xml xml=<domain type="kvm">
Nov 28 18:20:34 compute-0 nova_compute[189296]:  <uuid>38dd3ba8-0751-41a0-b83f-b49dc0b192c6</uuid>
Nov 28 18:20:34 compute-0 nova_compute[189296]:  <name>instance-0000000c</name>
Nov 28 18:20:34 compute-0 nova_compute[189296]:  <memory>131072</memory>
Nov 28 18:20:34 compute-0 nova_compute[189296]:  <vcpu>1</vcpu>
Nov 28 18:20:34 compute-0 nova_compute[189296]:  <metadata>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <nova:name>tempest-ServerActionsTestJSON-server-120148377</nova:name>
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <nova:creationTime>2025-11-28 18:20:34</nova:creationTime>
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <nova:flavor name="m1.nano">
Nov 28 18:20:34 compute-0 nova_compute[189296]:        <nova:memory>128</nova:memory>
Nov 28 18:20:34 compute-0 nova_compute[189296]:        <nova:disk>1</nova:disk>
Nov 28 18:20:34 compute-0 nova_compute[189296]:        <nova:swap>0</nova:swap>
Nov 28 18:20:34 compute-0 nova_compute[189296]:        <nova:ephemeral>0</nova:ephemeral>
Nov 28 18:20:34 compute-0 nova_compute[189296]:        <nova:vcpus>1</nova:vcpus>
Nov 28 18:20:34 compute-0 nova_compute[189296]:      </nova:flavor>
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <nova:owner>
Nov 28 18:20:34 compute-0 nova_compute[189296]:        <nova:user uuid="44a8645b16fc4d99820df9d0c6154195">tempest-ServerActionsTestJSON-1827601863-project-member</nova:user>
Nov 28 18:20:34 compute-0 nova_compute[189296]:        <nova:project uuid="6ebd016d88464c67abefec4da518674a">tempest-ServerActionsTestJSON-1827601863</nova:project>
Nov 28 18:20:34 compute-0 nova_compute[189296]:      </nova:owner>
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <nova:root type="image" uuid="ffec9e61-65fb-46ae-8d34-338639229ec3"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <nova:ports>
Nov 28 18:20:34 compute-0 nova_compute[189296]:        <nova:port uuid="9dd54f15-0412-4387-bc8f-07d1b4702dbb">
Nov 28 18:20:34 compute-0 nova_compute[189296]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:        </nova:port>
Nov 28 18:20:34 compute-0 nova_compute[189296]:      </nova:ports>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    </nova:instance>
Nov 28 18:20:34 compute-0 nova_compute[189296]:  </metadata>
Nov 28 18:20:34 compute-0 nova_compute[189296]:  <sysinfo type="smbios">
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <system>
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <entry name="manufacturer">RDO</entry>
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <entry name="product">OpenStack Compute</entry>
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <entry name="serial">38dd3ba8-0751-41a0-b83f-b49dc0b192c6</entry>
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <entry name="uuid">38dd3ba8-0751-41a0-b83f-b49dc0b192c6</entry>
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <entry name="family">Virtual Machine</entry>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    </system>
Nov 28 18:20:34 compute-0 nova_compute[189296]:  </sysinfo>
Nov 28 18:20:34 compute-0 nova_compute[189296]:  <os>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <boot dev="hd"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <smbios mode="sysinfo"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:  </os>
Nov 28 18:20:34 compute-0 nova_compute[189296]:  <features>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <acpi/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <apic/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <vmcoreinfo/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:  </features>
Nov 28 18:20:34 compute-0 nova_compute[189296]:  <clock offset="utc">
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <timer name="pit" tickpolicy="delay"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <timer name="hpet" present="no"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:  </clock>
Nov 28 18:20:34 compute-0 nova_compute[189296]:  <cpu mode="host-model" match="exact">
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <topology sockets="1" cores="1" threads="1"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:  </cpu>
Nov 28 18:20:34 compute-0 nova_compute[189296]:  <devices>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <disk type="file" device="disk">
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <target dev="vda" bus="virtio"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <disk type="file" device="cdrom">
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <driver name="qemu" type="raw" cache="none"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.config"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <target dev="sda" bus="sata"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <interface type="ethernet">
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <mac address="fa:16:3e:ad:e5:da"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <driver name="vhost" rx_queue_size="512"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <mtu size="1442"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <target dev="tap9dd54f15-04"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    </interface>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <serial type="pty">
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <log file="/var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/console.log" append="off"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    </serial>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <video>
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    </video>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <input type="tablet" bus="usb"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <input type="keyboard" bus="usb"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <rng model="virtio">
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <backend model="random">/dev/urandom</backend>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    </rng>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <controller type="usb" index="0"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    <memballoon model="virtio">
Nov 28 18:20:34 compute-0 nova_compute[189296]:      <stats period="10"/>
Nov 28 18:20:34 compute-0 nova_compute[189296]:    </memballoon>
Nov 28 18:20:34 compute-0 nova_compute[189296]:  </devices>
Nov 28 18:20:34 compute-0 nova_compute[189296]: </domain>
Nov 28 18:20:34 compute-0 nova_compute[189296]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.203 189300 DEBUG oslo_concurrency.processutils [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.261 189300 DEBUG oslo_concurrency.processutils [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.262 189300 DEBUG oslo_concurrency.processutils [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.327 189300 DEBUG oslo_concurrency.processutils [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.328 189300 DEBUG nova.objects.instance [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lazy-loading 'trusted_certs' on Instance uuid 38dd3ba8-0751-41a0-b83f-b49dc0b192c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.354 189300 DEBUG oslo_concurrency.processutils [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.436 189300 DEBUG oslo_concurrency.processutils [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.436 189300 DEBUG nova.virt.disk.api [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Checking if we can resize image /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.437 189300 DEBUG oslo_concurrency.processutils [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.505 189300 DEBUG oslo_concurrency.processutils [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.507 189300 DEBUG nova.virt.disk.api [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Cannot resize image /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.508 189300 DEBUG nova.objects.instance [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lazy-loading 'migration_context' on Instance uuid 38dd3ba8-0751-41a0-b83f-b49dc0b192c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.533 189300 DEBUG nova.virt.libvirt.vif [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-28T18:19:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-120148377',display_name='tempest-ServerActionsTestJSON-server-120148377',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-120148377',id=12,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDNKDhkiMtsztQmvM2gRYqVRTHcsj/9P9Cg/+MCIxNFg5QbGBxNz8mS/LylMSt0qq29jzqRfKycq5Qi4LzakhV4vYbtYARzjXolBVflKv2a5LVTztOBqSNR1wZxrvf10hw==',key_name='tempest-keypair-957693611',keypairs=<?>,launch_index=0,launched_at=2025-11-28T18:19:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='6ebd016d88464c67abefec4da518674a',ramdisk_id='',reservation_id='r-jl0w8ww4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1827601863',owner_user_name='tempest-ServerActionsTestJSON-1827601863-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:20:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='44a8645b16fc4d99820df9d0c6154195',uuid=38dd3ba8-0751-41a0-b83f-b49dc0b192c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "address": "fa:16:3e:ad:e5:da", "network": {"id": "cecb017f-4e6e-4722-8798-5d73232e6fbd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1305466028-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ebd016d88464c67abefec4da518674a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dd54f15-04", "ovs_interfaceid": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.534 189300 DEBUG nova.network.os_vif_util [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Converting VIF {"id": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "address": "fa:16:3e:ad:e5:da", "network": {"id": "cecb017f-4e6e-4722-8798-5d73232e6fbd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1305466028-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ebd016d88464c67abefec4da518674a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dd54f15-04", "ovs_interfaceid": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.535 189300 DEBUG nova.network.os_vif_util [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ad:e5:da,bridge_name='br-int',has_traffic_filtering=True,id=9dd54f15-0412-4387-bc8f-07d1b4702dbb,network=Network(cecb017f-4e6e-4722-8798-5d73232e6fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dd54f15-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.536 189300 DEBUG os_vif [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ad:e5:da,bridge_name='br-int',has_traffic_filtering=True,id=9dd54f15-0412-4387-bc8f-07d1b4702dbb,network=Network(cecb017f-4e6e-4722-8798-5d73232e6fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dd54f15-04') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.536 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.537 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.538 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.542 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.542 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9dd54f15-04, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.543 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9dd54f15-04, col_values=(('external_ids', {'iface-id': '9dd54f15-0412-4387-bc8f-07d1b4702dbb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:e5:da', 'vm-uuid': '38dd3ba8-0751-41a0-b83f-b49dc0b192c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.545 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:34 compute-0 NetworkManager[56307]: <info>  [1764354034.5466] manager: (tap9dd54f15-04): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.547 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.551 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.552 189300 INFO os_vif [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ad:e5:da,bridge_name='br-int',has_traffic_filtering=True,id=9dd54f15-0412-4387-bc8f-07d1b4702dbb,network=Network(cecb017f-4e6e-4722-8798-5d73232e6fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dd54f15-04')#033[00m
Nov 28 18:20:34 compute-0 kernel: tap9dd54f15-04: entered promiscuous mode
Nov 28 18:20:34 compute-0 systemd-udevd[250396]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 18:20:34 compute-0 NetworkManager[56307]: <info>  [1764354034.6243] manager: (tap9dd54f15-04): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Nov 28 18:20:34 compute-0 ovn_controller[97771]: 2025-11-28T18:20:34Z|00137|binding|INFO|Claiming lport 9dd54f15-0412-4387-bc8f-07d1b4702dbb for this chassis.
Nov 28 18:20:34 compute-0 ovn_controller[97771]: 2025-11-28T18:20:34Z|00138|binding|INFO|9dd54f15-0412-4387-bc8f-07d1b4702dbb: Claiming fa:16:3e:ad:e5:da 10.100.0.8
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.630 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:34 compute-0 NetworkManager[56307]: <info>  [1764354034.6363] device (tap9dd54f15-04): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 28 18:20:34 compute-0 NetworkManager[56307]: <info>  [1764354034.6410] device (tap9dd54f15-04): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.641 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:e5:da 10.100.0.8'], port_security=['fa:16:3e:ad:e5:da 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '38dd3ba8-0751-41a0-b83f-b49dc0b192c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cecb017f-4e6e-4722-8798-5d73232e6fbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ebd016d88464c67abefec4da518674a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '54c85ea7-0279-4254-b89c-237ccce3cf9e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.217'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e84ddcd7-545a-4e48-a6ce-b80b286b2303, chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=9dd54f15-0412-4387-bc8f-07d1b4702dbb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.642 106624 INFO neutron.agent.ovn.metadata.agent [-] Port 9dd54f15-0412-4387-bc8f-07d1b4702dbb in datapath cecb017f-4e6e-4722-8798-5d73232e6fbd bound to our chassis#033[00m
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.644 106624 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cecb017f-4e6e-4722-8798-5d73232e6fbd#033[00m
Nov 28 18:20:34 compute-0 ovn_controller[97771]: 2025-11-28T18:20:34Z|00139|binding|INFO|Setting lport 9dd54f15-0412-4387-bc8f-07d1b4702dbb ovn-installed in OVS
Nov 28 18:20:34 compute-0 ovn_controller[97771]: 2025-11-28T18:20:34Z|00140|binding|INFO|Setting lport 9dd54f15-0412-4387-bc8f-07d1b4702dbb up in Southbound
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.649 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.656 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[8e141b91-dfaf-4327-b6ab-22a40eb03a9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.657 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcecb017f-41 in ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.659 238909 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcecb017f-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.659 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[a2aa2a94-99e0-4f7b-86e8-416c916c388c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.662 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[63b7495d-1fae-4e47-84d2-225bd10fa2cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.662 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.674 106734 DEBUG oslo.privsep.daemon [-] privsep: reply[8dddc110-0759-4404-9e4b-520485675367]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:34 compute-0 systemd-machined[155703]: New machine qemu-14-instance-0000000c.
Nov 28 18:20:34 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000c.
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.703 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[d7002af7-dafe-43cb-87b8-3ffeeee86ea7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.732 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[fe409a87-5a1e-401c-b15a-de8d6cd9e14e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:34 compute-0 NetworkManager[56307]: <info>  [1764354034.7408] manager: (tapcecb017f-40): new Veth device (/org/freedesktop/NetworkManager/Devices/67)
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.738 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd1ac2f-bce6-4069-9487-8451c30e6ac8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.779 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[84c7a9ae-ec40-4f0c-a1db-1ee71102a09c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.782 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[27e99d1b-8868-4e0c-b9e6-bf0717971e94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:34 compute-0 NetworkManager[56307]: <info>  [1764354034.8031] device (tapcecb017f-40): carrier: link connected
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.809 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[57b9b9ad-27f6-4eec-9bf0-a6170288ced0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.826 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[14931676-2078-40c6-a785-b4cafe26891a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcecb017f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:ab:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518039, 'reachable_time': 16119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250551, 'error': None, 'target': 'ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.840 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[bbab60b0-4132-45ec-b14d-63f2ac09e581]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe35:ab55'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 518039, 'tstamp': 518039}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250552, 'error': None, 'target': 'ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.857 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[335ad60f-d94b-4eaf-92b9-b3b54fffe6c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcecb017f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:ab:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518039, 'reachable_time': 16119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250553, 'error': None, 'target': 'ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.891 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[4cd90ae0-f6bc-4466-bcba-7bbaf4ea7204]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.953 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[0e9e3d45-4f54-4660-b1e7-f885d9a7ecf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.954 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcecb017f-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.954 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.955 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcecb017f-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:20:34 compute-0 kernel: tapcecb017f-40: entered promiscuous mode
Nov 28 18:20:34 compute-0 NetworkManager[56307]: <info>  [1764354034.9580] manager: (tapcecb017f-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.957 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.965 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcecb017f-40, col_values=(('external_ids', {'iface-id': '9f681880-a374-4938-a7d7-30fad6716ed2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.966 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:34 compute-0 ovn_controller[97771]: 2025-11-28T18:20:34Z|00141|binding|INFO|Releasing lport 9f681880-a374-4938-a7d7-30fad6716ed2 from this chassis (sb_readonly=0)
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.967 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.970 106624 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cecb017f-4e6e-4722-8798-5d73232e6fbd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cecb017f-4e6e-4722-8798-5d73232e6fbd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.971 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[bb69df5d-d75e-4124-8fcf-212cedea1b6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.971 106624 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: global
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]:    log         /dev/log local0 debug
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]:    log-tag     haproxy-metadata-proxy-cecb017f-4e6e-4722-8798-5d73232e6fbd
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]:    user        root
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]:    group       root
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]:    maxconn     1024
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]:    pidfile     /var/lib/neutron/external/pids/cecb017f-4e6e-4722-8798-5d73232e6fbd.pid.haproxy
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]:    daemon
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: defaults
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]:    log global
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]:    mode http
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]:    option httplog
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]:    option dontlognull
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]:    option http-server-close
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]:    option forwardfor
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]:    retries                 3
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]:    timeout http-request    30s
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]:    timeout connect         30s
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]:    timeout client          32s
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]:    timeout server          32s
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]:    timeout http-keep-alive 30s
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: listen listener
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]:    bind 169.254.169.254:80
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]:    server metadata /var/lib/neutron/metadata_proxy
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]:    http-request add-header X-OVN-Network-ID cecb017f-4e6e-4722-8798-5d73232e6fbd
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 28 18:20:34 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:34.972 106624 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd', 'env', 'PROCESS_TAG=haproxy-cecb017f-4e6e-4722-8798-5d73232e6fbd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cecb017f-4e6e-4722-8798-5d73232e6fbd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 28 18:20:34 compute-0 nova_compute[189296]: 2025-11-28 18:20:34.989 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:35 compute-0 ovn_controller[97771]: 2025-11-28T18:20:35Z|00142|binding|INFO|Releasing lport fadccca5-e309-4390-a64b-6711ee103450 from this chassis (sb_readonly=0)
Nov 28 18:20:35 compute-0 ovn_controller[97771]: 2025-11-28T18:20:35Z|00143|binding|INFO|Releasing lport 9f681880-a374-4938-a7d7-30fad6716ed2 from this chassis (sb_readonly=0)
Nov 28 18:20:35 compute-0 podman[250585]: 2025-11-28 18:20:35.384301892 +0000 UTC m=+0.077819151 container create 24707d47a0c29db69a313ba889b68d77711da4958c1f22ddb667d3e6b5a225e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 28 18:20:35 compute-0 systemd[1]: Started libpod-conmon-24707d47a0c29db69a313ba889b68d77711da4958c1f22ddb667d3e6b5a225e3.scope.
Nov 28 18:20:35 compute-0 podman[250585]: 2025-11-28 18:20:35.333978041 +0000 UTC m=+0.027495330 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 28 18:20:35 compute-0 nova_compute[189296]: 2025-11-28 18:20:35.432 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:35 compute-0 systemd[1]: Started libcrun container.
Nov 28 18:20:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfc6d14e26c1377bcea9c4038e667b7a7859f4a781cbacfefef35dd22f2737dc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 28 18:20:35 compute-0 podman[250585]: 2025-11-28 18:20:35.465696239 +0000 UTC m=+0.159213528 container init 24707d47a0c29db69a313ba889b68d77711da4958c1f22ddb667d3e6b5a225e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 28 18:20:35 compute-0 podman[250585]: 2025-11-28 18:20:35.475311596 +0000 UTC m=+0.168828855 container start 24707d47a0c29db69a313ba889b68d77711da4958c1f22ddb667d3e6b5a225e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:20:35 compute-0 neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd[250602]: [NOTICE]   (250610) : New worker (250613) forked
Nov 28 18:20:35 compute-0 neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd[250602]: [NOTICE]   (250610) : Loading success.
Nov 28 18:20:35 compute-0 nova_compute[189296]: 2025-11-28 18:20:35.533 189300 DEBUG nova.virt.libvirt.host [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Removed pending event for 38dd3ba8-0751-41a0-b83f-b49dc0b192c6 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 28 18:20:35 compute-0 nova_compute[189296]: 2025-11-28 18:20:35.534 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764354035.5333762, 38dd3ba8-0751-41a0-b83f-b49dc0b192c6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:20:35 compute-0 nova_compute[189296]: 2025-11-28 18:20:35.534 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] VM Resumed (Lifecycle Event)#033[00m
Nov 28 18:20:35 compute-0 nova_compute[189296]: 2025-11-28 18:20:35.536 189300 DEBUG nova.compute.manager [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 28 18:20:35 compute-0 nova_compute[189296]: 2025-11-28 18:20:35.542 189300 INFO nova.virt.libvirt.driver [-] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Instance rebooted successfully.#033[00m
Nov 28 18:20:35 compute-0 nova_compute[189296]: 2025-11-28 18:20:35.543 189300 DEBUG nova.compute.manager [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:20:35 compute-0 nova_compute[189296]: 2025-11-28 18:20:35.556 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:20:35 compute-0 nova_compute[189296]: 2025-11-28 18:20:35.561 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 18:20:35 compute-0 nova_compute[189296]: 2025-11-28 18:20:35.607 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Nov 28 18:20:35 compute-0 nova_compute[189296]: 2025-11-28 18:20:35.608 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764354035.5392644, 38dd3ba8-0751-41a0-b83f-b49dc0b192c6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:20:35 compute-0 nova_compute[189296]: 2025-11-28 18:20:35.608 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] VM Started (Lifecycle Event)#033[00m
Nov 28 18:20:35 compute-0 nova_compute[189296]: 2025-11-28 18:20:35.635 189300 DEBUG oslo_concurrency.lockutils [None req-cda681b1-f0e2-48ba-a504-a810f13615ee 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 9.393s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:20:35 compute-0 nova_compute[189296]: 2025-11-28 18:20:35.637 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:20:35 compute-0 nova_compute[189296]: 2025-11-28 18:20:35.650 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 18:20:36 compute-0 nova_compute[189296]: 2025-11-28 18:20:36.143 189300 DEBUG nova.compute.manager [req-e41d2a71-65c2-4889-9466-732130bc9871 req-5c86794c-d3a3-4efb-ae64-660e8cbbc357 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Received event network-vif-plugged-9dd54f15-0412-4387-bc8f-07d1b4702dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:20:36 compute-0 nova_compute[189296]: 2025-11-28 18:20:36.143 189300 DEBUG oslo_concurrency.lockutils [req-e41d2a71-65c2-4889-9466-732130bc9871 req-5c86794c-d3a3-4efb-ae64-660e8cbbc357 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:20:36 compute-0 nova_compute[189296]: 2025-11-28 18:20:36.143 189300 DEBUG oslo_concurrency.lockutils [req-e41d2a71-65c2-4889-9466-732130bc9871 req-5c86794c-d3a3-4efb-ae64-660e8cbbc357 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:20:36 compute-0 nova_compute[189296]: 2025-11-28 18:20:36.144 189300 DEBUG oslo_concurrency.lockutils [req-e41d2a71-65c2-4889-9466-732130bc9871 req-5c86794c-d3a3-4efb-ae64-660e8cbbc357 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:20:36 compute-0 nova_compute[189296]: 2025-11-28 18:20:36.144 189300 DEBUG nova.compute.manager [req-e41d2a71-65c2-4889-9466-732130bc9871 req-5c86794c-d3a3-4efb-ae64-660e8cbbc357 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] No waiting events found dispatching network-vif-plugged-9dd54f15-0412-4387-bc8f-07d1b4702dbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:20:36 compute-0 nova_compute[189296]: 2025-11-28 18:20:36.144 189300 WARNING nova.compute.manager [req-e41d2a71-65c2-4889-9466-732130bc9871 req-5c86794c-d3a3-4efb-ae64-660e8cbbc357 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Received unexpected event network-vif-plugged-9dd54f15-0412-4387-bc8f-07d1b4702dbb for instance with vm_state active and task_state None.#033[00m
Nov 28 18:20:36 compute-0 nova_compute[189296]: 2025-11-28 18:20:36.144 189300 DEBUG nova.compute.manager [req-e41d2a71-65c2-4889-9466-732130bc9871 req-5c86794c-d3a3-4efb-ae64-660e8cbbc357 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Received event network-vif-plugged-9dd54f15-0412-4387-bc8f-07d1b4702dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:20:36 compute-0 nova_compute[189296]: 2025-11-28 18:20:36.144 189300 DEBUG oslo_concurrency.lockutils [req-e41d2a71-65c2-4889-9466-732130bc9871 req-5c86794c-d3a3-4efb-ae64-660e8cbbc357 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:20:36 compute-0 nova_compute[189296]: 2025-11-28 18:20:36.145 189300 DEBUG oslo_concurrency.lockutils [req-e41d2a71-65c2-4889-9466-732130bc9871 req-5c86794c-d3a3-4efb-ae64-660e8cbbc357 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:20:36 compute-0 nova_compute[189296]: 2025-11-28 18:20:36.145 189300 DEBUG oslo_concurrency.lockutils [req-e41d2a71-65c2-4889-9466-732130bc9871 req-5c86794c-d3a3-4efb-ae64-660e8cbbc357 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:20:36 compute-0 nova_compute[189296]: 2025-11-28 18:20:36.145 189300 DEBUG nova.compute.manager [req-e41d2a71-65c2-4889-9466-732130bc9871 req-5c86794c-d3a3-4efb-ae64-660e8cbbc357 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] No waiting events found dispatching network-vif-plugged-9dd54f15-0412-4387-bc8f-07d1b4702dbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:20:36 compute-0 nova_compute[189296]: 2025-11-28 18:20:36.145 189300 WARNING nova.compute.manager [req-e41d2a71-65c2-4889-9466-732130bc9871 req-5c86794c-d3a3-4efb-ae64-660e8cbbc357 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Received unexpected event network-vif-plugged-9dd54f15-0412-4387-bc8f-07d1b4702dbb for instance with vm_state active and task_state None.#033[00m
Nov 28 18:20:36 compute-0 nova_compute[189296]: 2025-11-28 18:20:36.145 189300 DEBUG nova.compute.manager [req-e41d2a71-65c2-4889-9466-732130bc9871 req-5c86794c-d3a3-4efb-ae64-660e8cbbc357 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Received event network-vif-plugged-9dd54f15-0412-4387-bc8f-07d1b4702dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:20:36 compute-0 nova_compute[189296]: 2025-11-28 18:20:36.146 189300 DEBUG oslo_concurrency.lockutils [req-e41d2a71-65c2-4889-9466-732130bc9871 req-5c86794c-d3a3-4efb-ae64-660e8cbbc357 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:20:36 compute-0 nova_compute[189296]: 2025-11-28 18:20:36.146 189300 DEBUG oslo_concurrency.lockutils [req-e41d2a71-65c2-4889-9466-732130bc9871 req-5c86794c-d3a3-4efb-ae64-660e8cbbc357 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:20:36 compute-0 nova_compute[189296]: 2025-11-28 18:20:36.146 189300 DEBUG oslo_concurrency.lockutils [req-e41d2a71-65c2-4889-9466-732130bc9871 req-5c86794c-d3a3-4efb-ae64-660e8cbbc357 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:20:36 compute-0 nova_compute[189296]: 2025-11-28 18:20:36.146 189300 DEBUG nova.compute.manager [req-e41d2a71-65c2-4889-9466-732130bc9871 req-5c86794c-d3a3-4efb-ae64-660e8cbbc357 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] No waiting events found dispatching network-vif-plugged-9dd54f15-0412-4387-bc8f-07d1b4702dbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:20:36 compute-0 nova_compute[189296]: 2025-11-28 18:20:36.146 189300 WARNING nova.compute.manager [req-e41d2a71-65c2-4889-9466-732130bc9871 req-5c86794c-d3a3-4efb-ae64-660e8cbbc357 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Received unexpected event network-vif-plugged-9dd54f15-0412-4387-bc8f-07d1b4702dbb for instance with vm_state active and task_state None.#033[00m
Nov 28 18:20:37 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:37.092 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:8b:d3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:a2:f8:d3:3f:9a'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:20:37 compute-0 nova_compute[189296]: 2025-11-28 18:20:37.092 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:37 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:37.093 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 28 18:20:38 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:38.095 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d60b742f-7e94-4137-b50a-cfc8eac54167, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:20:38 compute-0 nova_compute[189296]: 2025-11-28 18:20:38.660 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:39 compute-0 nova_compute[189296]: 2025-11-28 18:20:39.546 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:40 compute-0 ovn_controller[97771]: 2025-11-28T18:20:40Z|00144|binding|INFO|Releasing lport fadccca5-e309-4390-a64b-6711ee103450 from this chassis (sb_readonly=0)
Nov 28 18:20:40 compute-0 ovn_controller[97771]: 2025-11-28T18:20:40Z|00145|binding|INFO|Releasing lport 9f681880-a374-4938-a7d7-30fad6716ed2 from this chassis (sb_readonly=0)
Nov 28 18:20:40 compute-0 nova_compute[189296]: 2025-11-28 18:20:40.703 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:41 compute-0 ovn_controller[97771]: 2025-11-28T18:20:41Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:39:25:e6 10.100.0.4
Nov 28 18:20:41 compute-0 ovn_controller[97771]: 2025-11-28T18:20:41Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:39:25:e6 10.100.0.4
Nov 28 18:20:43 compute-0 nova_compute[189296]: 2025-11-28 18:20:43.664 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:44 compute-0 nova_compute[189296]: 2025-11-28 18:20:44.547 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:45 compute-0 podman[250651]: 2025-11-28 18:20:45.019586206 +0000 UTC m=+0.075628256 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b, tcib_managed=true)
Nov 28 18:20:45 compute-0 podman[250650]: 2025-11-28 18:20:45.041993148 +0000 UTC m=+0.099360701 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Nov 28 18:20:45 compute-0 podman[250652]: 2025-11-28 18:20:45.045316191 +0000 UTC m=+0.095758033 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.111 189300 INFO nova.compute.manager [None req-4e0499d2-81f3-46c4-a3e7-8b37089a8bb6 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Get console output#033[00m
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.121 238742 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.490 189300 DEBUG oslo_concurrency.lockutils [None req-4f2a5fce-1e26-42f7-983b-f28c3aaf9d82 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Acquiring lock "5e570bcf-69d9-41f4-b621-d75ff7b1bd6c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.491 189300 DEBUG oslo_concurrency.lockutils [None req-4f2a5fce-1e26-42f7-983b-f28c3aaf9d82 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "5e570bcf-69d9-41f4-b621-d75ff7b1bd6c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.492 189300 DEBUG oslo_concurrency.lockutils [None req-4f2a5fce-1e26-42f7-983b-f28c3aaf9d82 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Acquiring lock "5e570bcf-69d9-41f4-b621-d75ff7b1bd6c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.493 189300 DEBUG oslo_concurrency.lockutils [None req-4f2a5fce-1e26-42f7-983b-f28c3aaf9d82 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "5e570bcf-69d9-41f4-b621-d75ff7b1bd6c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.494 189300 DEBUG oslo_concurrency.lockutils [None req-4f2a5fce-1e26-42f7-983b-f28c3aaf9d82 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "5e570bcf-69d9-41f4-b621-d75ff7b1bd6c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.496 189300 INFO nova.compute.manager [None req-4f2a5fce-1e26-42f7-983b-f28c3aaf9d82 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Terminating instance
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.498 189300 DEBUG nova.compute.manager [None req-4f2a5fce-1e26-42f7-983b-f28c3aaf9d82 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 28 18:20:47 compute-0 kernel: tape58535aa-06 (unregistering): left promiscuous mode
Nov 28 18:20:47 compute-0 NetworkManager[56307]: <info>  [1764354047.5389] device (tape58535aa-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 28 18:20:47 compute-0 ovn_controller[97771]: 2025-11-28T18:20:47Z|00146|binding|INFO|Releasing lport e58535aa-0624-4101-bd81-7c3c483d4ac7 from this chassis (sb_readonly=0)
Nov 28 18:20:47 compute-0 ovn_controller[97771]: 2025-11-28T18:20:47Z|00147|binding|INFO|Setting lport e58535aa-0624-4101-bd81-7c3c483d4ac7 down in Southbound
Nov 28 18:20:47 compute-0 ovn_controller[97771]: 2025-11-28T18:20:47Z|00148|binding|INFO|Removing iface tape58535aa-06 ovn-installed in OVS
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.547 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.562 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:25:e6 10.100.0.4'], port_security=['fa:16:3e:39:25:e6 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '5e570bcf-69d9-41f4-b621-d75ff7b1bd6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16e2cef3-e4a2-4570-962f-fcbf9f3d2577', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c41bbf2b30ca428fbd489c3dc29e8045', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bbd1a953-a99e-470b-b1ba-0c8ce7261629', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7149c56-1986-4c48-b442-f7c364e29e84, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=e58535aa-0624-4101-bd81-7c3c483d4ac7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.563 106624 INFO neutron.agent.ovn.metadata.agent [-] Port e58535aa-0624-4101-bd81-7c3c483d4ac7 in datapath 16e2cef3-e4a2-4570-962f-fcbf9f3d2577 unbound from our chassis
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.564 106624 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16e2cef3-e4a2-4570-962f-fcbf9f3d2577
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.568 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.584 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[ee157123-4a6a-4813-859b-0a06788362a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:47 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Nov 28 18:20:47 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 37.223s CPU time.
Nov 28 18:20:47 compute-0 systemd-machined[155703]: Machine qemu-13-instance-0000000d terminated.
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.621 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[d74e4042-9c0c-4034-921f-fbf14eb88ee5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.624 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[da639ebb-906e-49d9-bcd5-f732c4589235]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.655 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[8f5ca810-41d2-42ed-82cc-573d7f113c18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.671 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[9c27b208-a795-4124-a1a9-77161ae79aa1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16e2cef3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:52:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508663, 'reachable_time': 28194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250719, 'error': None, 'target': 'ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.688 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[13d85493-2b50-4a42-9572-01c777640a92]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap16e2cef3-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508674, 'tstamp': 508674}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250720, 'error': None, 'target': 'ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap16e2cef3-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508677, 'tstamp': 508677}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250720, 'error': None, 'target': 'ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.690 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16e2cef3-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.692 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.698 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.699 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16e2cef3-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.700 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.700 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16e2cef3-e0, col_values=(('external_ids', {'iface-id': 'fadccca5-e309-4390-a64b-6711ee103450'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.701 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:20:47 compute-0 kernel: tape58535aa-06: entered promiscuous mode
Nov 28 18:20:47 compute-0 systemd-udevd[250712]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 18:20:47 compute-0 kernel: tape58535aa-06 (unregistering): left promiscuous mode
Nov 28 18:20:47 compute-0 ovn_controller[97771]: 2025-11-28T18:20:47Z|00149|binding|INFO|Claiming lport e58535aa-0624-4101-bd81-7c3c483d4ac7 for this chassis.
Nov 28 18:20:47 compute-0 ovn_controller[97771]: 2025-11-28T18:20:47Z|00150|binding|INFO|e58535aa-0624-4101-bd81-7c3c483d4ac7: Claiming fa:16:3e:39:25:e6 10.100.0.4
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.723 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:47 compute-0 ovn_controller[97771]: 2025-11-28T18:20:47Z|00151|binding|INFO|Setting lport e58535aa-0624-4101-bd81-7c3c483d4ac7 ovn-installed in OVS
Nov 28 18:20:47 compute-0 ovn_controller[97771]: 2025-11-28T18:20:47Z|00152|if_status|INFO|Not setting lport e58535aa-0624-4101-bd81-7c3c483d4ac7 down as sb is readonly
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.750 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.783 189300 INFO nova.virt.libvirt.driver [-] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Instance destroyed successfully.
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.784 189300 DEBUG nova.objects.instance [None req-4f2a5fce-1e26-42f7-983b-f28c3aaf9d82 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lazy-loading 'resources' on Instance uuid 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:20:47 compute-0 ovn_controller[97771]: 2025-11-28T18:20:47Z|00153|binding|INFO|Releasing lport e58535aa-0624-4101-bd81-7c3c483d4ac7 from this chassis (sb_readonly=0)
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.863 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:25:e6 10.100.0.4'], port_security=['fa:16:3e:39:25:e6 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '5e570bcf-69d9-41f4-b621-d75ff7b1bd6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16e2cef3-e4a2-4570-962f-fcbf9f3d2577', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c41bbf2b30ca428fbd489c3dc29e8045', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bbd1a953-a99e-470b-b1ba-0c8ce7261629', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7149c56-1986-4c48-b442-f7c364e29e84, chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=e58535aa-0624-4101-bd81-7c3c483d4ac7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.865 106624 INFO neutron.agent.ovn.metadata.agent [-] Port e58535aa-0624-4101-bd81-7c3c483d4ac7 in datapath 16e2cef3-e4a2-4570-962f-fcbf9f3d2577 bound to our chassis
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.866 106624 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16e2cef3-e4a2-4570-962f-fcbf9f3d2577
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.871 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:25:e6 10.100.0.4'], port_security=['fa:16:3e:39:25:e6 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '5e570bcf-69d9-41f4-b621-d75ff7b1bd6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16e2cef3-e4a2-4570-962f-fcbf9f3d2577', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c41bbf2b30ca428fbd489c3dc29e8045', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bbd1a953-a99e-470b-b1ba-0c8ce7261629', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7149c56-1986-4c48-b442-f7c364e29e84, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=e58535aa-0624-4101-bd81-7c3c483d4ac7) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.871 189300 DEBUG nova.virt.libvirt.vif [None req-4f2a5fce-1e26-42f7-983b-f28c3aaf9d82 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-28T18:19:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1212840995',display_name='tempest-TestNetworkBasicOps-server-1212840995',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1212840995',id=13,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE+8SFa3xQenW7OAm80PYbdQR5vzTm/9Wx8vyjhFMikx/tqkpCAIM9M1XwKxUttxXJbVjGWQJZ3bUpSJJtqa5la3F2ivvclV6oghFm55fNXyqDmtzHesal/acrtB1Knsrw==',key_name='tempest-TestNetworkBasicOps-1843861968',keypairs=<?>,launch_index=0,launched_at=2025-11-28T18:20:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c41bbf2b30ca428fbd489c3dc29e8045',ramdisk_id='',reservation_id='r-f5ps81n7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-543144913',owner_user_name='tempest-TestNetworkBasicOps-543144913-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-28T18:20:06Z,user_data=None,user_id='0052e0d91c7e4c98bd11644a4dca818a',uuid=5e570bcf-69d9-41f4-b621-d75ff7b1bd6c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e58535aa-0624-4101-bd81-7c3c483d4ac7", "address": "fa:16:3e:39:25:e6", "network": {"id": "16e2cef3-e4a2-4570-962f-fcbf9f3d2577", "bridge": "br-int", "label": "tempest-network-smoke--630554822", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c41bbf2b30ca428fbd489c3dc29e8045", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape58535aa-06", "ovs_interfaceid": "e58535aa-0624-4101-bd81-7c3c483d4ac7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.873 189300 DEBUG nova.network.os_vif_util [None req-4f2a5fce-1e26-42f7-983b-f28c3aaf9d82 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Converting VIF {"id": "e58535aa-0624-4101-bd81-7c3c483d4ac7", "address": "fa:16:3e:39:25:e6", "network": {"id": "16e2cef3-e4a2-4570-962f-fcbf9f3d2577", "bridge": "br-int", "label": "tempest-network-smoke--630554822", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c41bbf2b30ca428fbd489c3dc29e8045", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape58535aa-06", "ovs_interfaceid": "e58535aa-0624-4101-bd81-7c3c483d4ac7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.875 189300 DEBUG nova.network.os_vif_util [None req-4f2a5fce-1e26-42f7-983b-f28c3aaf9d82 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:39:25:e6,bridge_name='br-int',has_traffic_filtering=True,id=e58535aa-0624-4101-bd81-7c3c483d4ac7,network=Network(16e2cef3-e4a2-4570-962f-fcbf9f3d2577),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape58535aa-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.876 189300 DEBUG os_vif [None req-4f2a5fce-1e26-42f7-983b-f28c3aaf9d82 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:25:e6,bridge_name='br-int',has_traffic_filtering=True,id=e58535aa-0624-4101-bd81-7c3c483d4ac7,network=Network(16e2cef3-e4a2-4570-962f-fcbf9f3d2577),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape58535aa-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.878 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.879 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape58535aa-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.880 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.881 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.884 189300 INFO os_vif [None req-4f2a5fce-1e26-42f7-983b-f28c3aaf9d82 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:25:e6,bridge_name='br-int',has_traffic_filtering=True,id=e58535aa-0624-4101-bd81-7c3c483d4ac7,network=Network(16e2cef3-e4a2-4570-962f-fcbf9f3d2577),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape58535aa-06')#033[00m
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.885 189300 INFO nova.virt.libvirt.driver [None req-4f2a5fce-1e26-42f7-983b-f28c3aaf9d82 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Deleting instance files /var/lib/nova/instances/5e570bcf-69d9-41f4-b621-d75ff7b1bd6c_del
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.885 189300 INFO nova.virt.libvirt.driver [None req-4f2a5fce-1e26-42f7-983b-f28c3aaf9d82 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Deletion of /var/lib/nova/instances/5e570bcf-69d9-41f4-b621-d75ff7b1bd6c_del complete
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.889 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[2e620457-e7ca-416d-b53b-554b1bda7623]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.913 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[687a45cc-2481-4a2d-b604-24aeee9e1480]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.917 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[1dd83e5a-1f1e-408e-8921-06e6c430b0a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.934 189300 INFO nova.compute.manager [None req-4f2a5fce-1e26-42f7-983b-f28c3aaf9d82 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Took 0.44 seconds to destroy the instance on the hypervisor.
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.935 189300 DEBUG oslo.service.loopingcall [None req-4f2a5fce-1e26-42f7-983b-f28c3aaf9d82 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.936 189300 DEBUG nova.compute.manager [-] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.936 189300 DEBUG nova.network.neutron [-] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.942 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[288ea78a-0044-49f1-9c47-9a0ac05fdfbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.962 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[0359ae19-7588-4d36-b5ae-6ec34df3677d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16e2cef3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:52:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 9, 'rx_bytes': 658, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 9, 'rx_bytes': 658, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508663, 'reachable_time': 28194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250737, 'error': None, 'target': 'ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.976 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[a85d4360-0c8b-4541-bce2-240f1586210f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap16e2cef3-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508674, 'tstamp': 508674}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250738, 'error': None, 'target': 'ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap16e2cef3-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508677, 'tstamp': 508677}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250738, 'error': None, 'target': 'ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.978 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16e2cef3-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.980 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:47 compute-0 nova_compute[189296]: 2025-11-28 18:20:47.981 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.982 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16e2cef3-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.982 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.983 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16e2cef3-e0, col_values=(('external_ids', {'iface-id': 'fadccca5-e309-4390-a64b-6711ee103450'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.983 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.984 106624 INFO neutron.agent.ovn.metadata.agent [-] Port e58535aa-0624-4101-bd81-7c3c483d4ac7 in datapath 16e2cef3-e4a2-4570-962f-fcbf9f3d2577 unbound from our chassis#033[00m
Nov 28 18:20:47 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:47.985 106624 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16e2cef3-e4a2-4570-962f-fcbf9f3d2577#033[00m
Nov 28 18:20:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:48.019 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[def418e3-041c-4823-be2b-15cdecad4ebb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:48.058 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[bb56c35a-d9d3-4726-bf12-f7e5023dc1c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:48.063 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[0081be85-1a8b-43e5-98e0-1e973618c5fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:48.113 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[6ca25a0b-51c9-4dd9-832e-af557c4888ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:48.135 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[1f194b68-e29b-4b29-b9a3-b37a1c9ca8b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16e2cef3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:52:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 11, 'rx_bytes': 658, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 11, 'rx_bytes': 658, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508663, 'reachable_time': 28194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250744, 'error': None, 'target': 'ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:48.153 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[42499db1-3b0d-4e3c-8b22-bd2c1f53b134]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap16e2cef3-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508674, 'tstamp': 508674}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250745, 'error': None, 'target': 'ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap16e2cef3-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508677, 'tstamp': 508677}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250745, 'error': None, 'target': 'ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:48.154 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16e2cef3-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:20:48 compute-0 nova_compute[189296]: 2025-11-28 18:20:48.156 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:48.158 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16e2cef3-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:20:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:48.158 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:20:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:48.159 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16e2cef3-e0, col_values=(('external_ids', {'iface-id': 'fadccca5-e309-4390-a64b-6711ee103450'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:20:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:48.159 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:20:48 compute-0 nova_compute[189296]: 2025-11-28 18:20:48.239 189300 DEBUG nova.compute.manager [req-a62f1dba-b1fa-4f67-a39a-af07a7a9f347 req-041257ee-9748-44c6-8400-337d21e40a6f 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Received event network-vif-unplugged-e58535aa-0624-4101-bd81-7c3c483d4ac7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:20:48 compute-0 nova_compute[189296]: 2025-11-28 18:20:48.240 189300 DEBUG oslo_concurrency.lockutils [req-a62f1dba-b1fa-4f67-a39a-af07a7a9f347 req-041257ee-9748-44c6-8400-337d21e40a6f 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "5e570bcf-69d9-41f4-b621-d75ff7b1bd6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:20:48 compute-0 nova_compute[189296]: 2025-11-28 18:20:48.240 189300 DEBUG oslo_concurrency.lockutils [req-a62f1dba-b1fa-4f67-a39a-af07a7a9f347 req-041257ee-9748-44c6-8400-337d21e40a6f 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "5e570bcf-69d9-41f4-b621-d75ff7b1bd6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:20:48 compute-0 nova_compute[189296]: 2025-11-28 18:20:48.241 189300 DEBUG oslo_concurrency.lockutils [req-a62f1dba-b1fa-4f67-a39a-af07a7a9f347 req-041257ee-9748-44c6-8400-337d21e40a6f 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "5e570bcf-69d9-41f4-b621-d75ff7b1bd6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:20:48 compute-0 nova_compute[189296]: 2025-11-28 18:20:48.241 189300 DEBUG nova.compute.manager [req-a62f1dba-b1fa-4f67-a39a-af07a7a9f347 req-041257ee-9748-44c6-8400-337d21e40a6f 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] No waiting events found dispatching network-vif-unplugged-e58535aa-0624-4101-bd81-7c3c483d4ac7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:20:48 compute-0 nova_compute[189296]: 2025-11-28 18:20:48.241 189300 DEBUG nova.compute.manager [req-a62f1dba-b1fa-4f67-a39a-af07a7a9f347 req-041257ee-9748-44c6-8400-337d21e40a6f 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Received event network-vif-unplugged-e58535aa-0624-4101-bd81-7c3c483d4ac7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 28 18:20:48 compute-0 nova_compute[189296]: 2025-11-28 18:20:48.666 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:50 compute-0 nova_compute[189296]: 2025-11-28 18:20:50.338 189300 DEBUG nova.compute.manager [req-4676dbc2-3b1c-43cf-9a8f-14c19b0183a7 req-4982ebfa-e699-421d-ba3a-8bd8f01ae514 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Received event network-vif-plugged-e58535aa-0624-4101-bd81-7c3c483d4ac7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:20:50 compute-0 nova_compute[189296]: 2025-11-28 18:20:50.339 189300 DEBUG oslo_concurrency.lockutils [req-4676dbc2-3b1c-43cf-9a8f-14c19b0183a7 req-4982ebfa-e699-421d-ba3a-8bd8f01ae514 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "5e570bcf-69d9-41f4-b621-d75ff7b1bd6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:20:50 compute-0 nova_compute[189296]: 2025-11-28 18:20:50.339 189300 DEBUG oslo_concurrency.lockutils [req-4676dbc2-3b1c-43cf-9a8f-14c19b0183a7 req-4982ebfa-e699-421d-ba3a-8bd8f01ae514 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "5e570bcf-69d9-41f4-b621-d75ff7b1bd6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:20:50 compute-0 nova_compute[189296]: 2025-11-28 18:20:50.339 189300 DEBUG oslo_concurrency.lockutils [req-4676dbc2-3b1c-43cf-9a8f-14c19b0183a7 req-4982ebfa-e699-421d-ba3a-8bd8f01ae514 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "5e570bcf-69d9-41f4-b621-d75ff7b1bd6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:20:50 compute-0 nova_compute[189296]: 2025-11-28 18:20:50.340 189300 DEBUG nova.compute.manager [req-4676dbc2-3b1c-43cf-9a8f-14c19b0183a7 req-4982ebfa-e699-421d-ba3a-8bd8f01ae514 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] No waiting events found dispatching network-vif-plugged-e58535aa-0624-4101-bd81-7c3c483d4ac7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:20:50 compute-0 nova_compute[189296]: 2025-11-28 18:20:50.340 189300 WARNING nova.compute.manager [req-4676dbc2-3b1c-43cf-9a8f-14c19b0183a7 req-4982ebfa-e699-421d-ba3a-8bd8f01ae514 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Received unexpected event network-vif-plugged-e58535aa-0624-4101-bd81-7c3c483d4ac7 for instance with vm_state active and task_state deleting.#033[00m
Nov 28 18:20:50 compute-0 nova_compute[189296]: 2025-11-28 18:20:50.566 189300 DEBUG nova.network.neutron [-] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:20:50 compute-0 nova_compute[189296]: 2025-11-28 18:20:50.590 189300 INFO nova.compute.manager [-] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Took 2.65 seconds to deallocate network for instance.#033[00m
Nov 28 18:20:50 compute-0 nova_compute[189296]: 2025-11-28 18:20:50.646 189300 DEBUG oslo_concurrency.lockutils [None req-4f2a5fce-1e26-42f7-983b-f28c3aaf9d82 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:20:50 compute-0 nova_compute[189296]: 2025-11-28 18:20:50.647 189300 DEBUG oslo_concurrency.lockutils [None req-4f2a5fce-1e26-42f7-983b-f28c3aaf9d82 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:20:50 compute-0 nova_compute[189296]: 2025-11-28 18:20:50.691 189300 DEBUG nova.compute.manager [req-a7df87ce-fa80-4ae2-b7e8-a9cde1e27de9 req-24dffacf-9147-44f3-b263-3df6ecd69b7a 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Received event network-vif-deleted-e58535aa-0624-4101-bd81-7c3c483d4ac7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:20:50 compute-0 nova_compute[189296]: 2025-11-28 18:20:50.747 189300 DEBUG nova.compute.provider_tree [None req-4f2a5fce-1e26-42f7-983b-f28c3aaf9d82 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:20:50 compute-0 nova_compute[189296]: 2025-11-28 18:20:50.770 189300 DEBUG nova.scheduler.client.report [None req-4f2a5fce-1e26-42f7-983b-f28c3aaf9d82 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:20:50 compute-0 nova_compute[189296]: 2025-11-28 18:20:50.801 189300 DEBUG oslo_concurrency.lockutils [None req-4f2a5fce-1e26-42f7-983b-f28c3aaf9d82 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:20:50 compute-0 nova_compute[189296]: 2025-11-28 18:20:50.861 189300 INFO nova.scheduler.client.report [None req-4f2a5fce-1e26-42f7-983b-f28c3aaf9d82 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Deleted allocations for instance 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c#033[00m
Nov 28 18:20:50 compute-0 nova_compute[189296]: 2025-11-28 18:20:50.957 189300 DEBUG oslo_concurrency.lockutils [None req-4f2a5fce-1e26-42f7-983b-f28c3aaf9d82 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "5e570bcf-69d9-41f4-b621-d75ff7b1bd6c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.466s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:20:51 compute-0 nova_compute[189296]: 2025-11-28 18:20:51.401 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.985 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.986 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.988 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc143395760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.989 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.990 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.990 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.991 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.991 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.991 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.991 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.992 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.992 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.992 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.993 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.993 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.993 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.993 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.994 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.994 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.994 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.994 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.995 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.995 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.996 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.996 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.996 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.996 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:20:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:51.997 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:20:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:52.003 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 0af9c8e6-8030-462a-9dfd-d52f041685f5 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 28 18:20:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:52.004 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/0af9c8e6-8030-462a-9dfd-d52f041685f5 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}1b19fef84fe76c5f8eb41f423a94cfc31b2af00fb7940935967c184dd40fa55a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 28 18:20:52 compute-0 podman[250748]: 2025-11-28 18:20:52.059560883 +0000 UTC m=+0.095823363 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:20:52 compute-0 podman[250747]: 2025-11-28 18:20:52.063533131 +0000 UTC m=+0.105919613 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 18:20:52 compute-0 podman[250753]: 2025-11-28 18:20:52.09307995 +0000 UTC m=+0.119489337 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:20:52 compute-0 podman[250749]: 2025-11-28 18:20:52.097076999 +0000 UTC m=+0.139257405 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, distribution-scope=public, version=9.4, release=1214.1726694543, container_name=kepler, vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.expose-services=, com.redhat.component=ubi9-container, release-0.7.12=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 28 18:20:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:52.632 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:20:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:52.633 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:20:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:52.634 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:20:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:52.680 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1850 Content-Type: application/json Date: Fri, 28 Nov 2025 18:20:52 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-7601fc31-622f-422b-8c17-4dff32cc0929 x-openstack-request-id: req-7601fc31-622f-422b-8c17-4dff32cc0929 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 28 18:20:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:52.681 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "0af9c8e6-8030-462a-9dfd-d52f041685f5", "name": "tempest-TestNetworkBasicOps-server-908375146", "status": "ACTIVE", "tenant_id": "c41bbf2b30ca428fbd489c3dc29e8045", "user_id": "0052e0d91c7e4c98bd11644a4dca818a", "metadata": {}, "hostId": "db02a64df7c1531c96454d720008362fac3509e10007ac2b305d5255", "image": {"id": "ffec9e61-65fb-46ae-8d34-338639229ec3", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ffec9e61-65fb-46ae-8d34-338639229ec3"}]}, "flavor": {"id": "b177f611-8f79-4bfd-9a12-e83e9545757b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/b177f611-8f79-4bfd-9a12-e83e9545757b"}]}, "created": "2025-11-28T18:18:50Z", "updated": "2025-11-28T18:19:04Z", "addresses": {"tempest-network-smoke--630554822": [{"version": 4, "addr": "10.100.0.9", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:45:0d:59"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/0af9c8e6-8030-462a-9dfd-d52f041685f5"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/0af9c8e6-8030-462a-9dfd-d52f041685f5"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestNetworkBasicOps-844617280", "OS-SRV-USG:launched_at": "2025-11-28T18:19:04.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-323845569"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000b", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response 
/usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 28 18:20:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:52.681 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/0af9c8e6-8030-462a-9dfd-d52f041685f5 used request id req-7601fc31-622f-422b-8c17-4dff32cc0929 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 28 18:20:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:52.682 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '0af9c8e6-8030-462a-9dfd-d52f041685f5', 'name': 'tempest-TestNetworkBasicOps-server-908375146', 'flavor': {'id': 'b177f611-8f79-4bfd-9a12-e83e9545757b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ffec9e61-65fb-46ae-8d34-338639229ec3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'c41bbf2b30ca428fbd489c3dc29e8045', 'user_id': '0052e0d91c7e4c98bd11644a4dca818a', 'hostId': 'db02a64df7c1531c96454d720008362fac3509e10007ac2b305d5255', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:20:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:52.684 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 38dd3ba8-0751-41a0-b83f-b49dc0b192c6 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 28 18:20:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:52.685 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/38dd3ba8-0751-41a0-b83f-b49dc0b192c6 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}1b19fef84fe76c5f8eb41f423a94cfc31b2af00fb7940935967c184dd40fa55a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 28 18:20:52 compute-0 nova_compute[189296]: 2025-11-28 18:20:52.881 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:52 compute-0 nova_compute[189296]: 2025-11-28 18:20:52.995 189300 DEBUG oslo_concurrency.lockutils [None req-137db523-a0ae-4fe8-94e3-3b9ce6aee88f 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Acquiring lock "0af9c8e6-8030-462a-9dfd-d52f041685f5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:20:52 compute-0 nova_compute[189296]: 2025-11-28 18:20:52.996 189300 DEBUG oslo_concurrency.lockutils [None req-137db523-a0ae-4fe8-94e3-3b9ce6aee88f 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "0af9c8e6-8030-462a-9dfd-d52f041685f5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:20:52 compute-0 nova_compute[189296]: 2025-11-28 18:20:52.996 189300 DEBUG oslo_concurrency.lockutils [None req-137db523-a0ae-4fe8-94e3-3b9ce6aee88f 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Acquiring lock "0af9c8e6-8030-462a-9dfd-d52f041685f5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:20:52 compute-0 nova_compute[189296]: 2025-11-28 18:20:52.997 189300 DEBUG oslo_concurrency.lockutils [None req-137db523-a0ae-4fe8-94e3-3b9ce6aee88f 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "0af9c8e6-8030-462a-9dfd-d52f041685f5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:20:52 compute-0 nova_compute[189296]: 2025-11-28 18:20:52.997 189300 DEBUG oslo_concurrency.lockutils [None req-137db523-a0ae-4fe8-94e3-3b9ce6aee88f 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "0af9c8e6-8030-462a-9dfd-d52f041685f5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:20:52 compute-0 nova_compute[189296]: 2025-11-28 18:20:52.999 189300 INFO nova.compute.manager [None req-137db523-a0ae-4fe8-94e3-3b9ce6aee88f 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Terminating instance#033[00m
Nov 28 18:20:53 compute-0 nova_compute[189296]: 2025-11-28 18:20:53.000 189300 DEBUG nova.compute.manager [None req-137db523-a0ae-4fe8-94e3-3b9ce6aee88f 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 28 18:20:53 compute-0 kernel: tap7a69f46e-77 (unregistering): left promiscuous mode
Nov 28 18:20:53 compute-0 NetworkManager[56307]: <info>  [1764354053.0395] device (tap7a69f46e-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 28 18:20:53 compute-0 ovn_controller[97771]: 2025-11-28T18:20:53Z|00154|binding|INFO|Releasing lport 7a69f46e-77c5-4129-9783-254170a7422b from this chassis (sb_readonly=0)
Nov 28 18:20:53 compute-0 ovn_controller[97771]: 2025-11-28T18:20:53Z|00155|binding|INFO|Setting lport 7a69f46e-77c5-4129-9783-254170a7422b down in Southbound
Nov 28 18:20:53 compute-0 ovn_controller[97771]: 2025-11-28T18:20:53Z|00156|binding|INFO|Removing iface tap7a69f46e-77 ovn-installed in OVS
Nov 28 18:20:53 compute-0 nova_compute[189296]: 2025-11-28 18:20:53.053 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:53 compute-0 nova_compute[189296]: 2025-11-28 18:20:53.067 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:53 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:53.080 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:0d:59 10.100.0.9'], port_security=['fa:16:3e:45:0d:59 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0af9c8e6-8030-462a-9dfd-d52f041685f5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16e2cef3-e4a2-4570-962f-fcbf9f3d2577', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c41bbf2b30ca428fbd489c3dc29e8045', 'neutron:revision_number': '4', 'neutron:security_group_ids': '56edd6d4-5886-44e5-ba5f-f7a536fc1148', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7149c56-1986-4c48-b442-f7c364e29e84, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=7a69f46e-77c5-4129-9783-254170a7422b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:20:53 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:53.082 106624 INFO neutron.agent.ovn.metadata.agent [-] Port 7a69f46e-77c5-4129-9783-254170a7422b in datapath 16e2cef3-e4a2-4570-962f-fcbf9f3d2577 unbound from our chassis#033[00m
Nov 28 18:20:53 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:53.084 106624 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 16e2cef3-e4a2-4570-962f-fcbf9f3d2577, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 28 18:20:53 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:53.089 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[9de8f3cc-2719-497e-88d7-72d655058285]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:20:53 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:53.090 106624 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577 namespace which is not needed anymore#033[00m
Nov 28 18:20:53 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Nov 28 18:20:53 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 45.567s CPU time.
Nov 28 18:20:53 compute-0 systemd-machined[155703]: Machine qemu-11-instance-0000000b terminated.
Nov 28 18:20:53 compute-0 neutron-haproxy-ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577[249486]: [NOTICE]   (249508) : haproxy version is 2.8.14-c23fe91
Nov 28 18:20:53 compute-0 neutron-haproxy-ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577[249486]: [NOTICE]   (249508) : path to executable is /usr/sbin/haproxy
Nov 28 18:20:53 compute-0 neutron-haproxy-ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577[249486]: [WARNING]  (249508) : Exiting Master process...
Nov 28 18:20:53 compute-0 neutron-haproxy-ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577[249486]: [ALERT]    (249508) : Current worker (249510) exited with code 143 (Terminated)
Nov 28 18:20:53 compute-0 neutron-haproxy-ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577[249486]: [WARNING]  (249508) : All workers exited. Exiting... (0)
Nov 28 18:20:53 compute-0 systemd[1]: libpod-85f477f43ad19c518c80e60a70f5753b575e3995037b544b15a929cb1b782a73.scope: Deactivated successfully.
Nov 28 18:20:53 compute-0 nova_compute[189296]: 2025-11-28 18:20:53.279 189300 INFO nova.virt.libvirt.driver [-] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Instance destroyed successfully.
Nov 28 18:20:53 compute-0 nova_compute[189296]: 2025-11-28 18:20:53.280 189300 DEBUG nova.objects.instance [None req-137db523-a0ae-4fe8-94e3-3b9ce6aee88f 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lazy-loading 'resources' on Instance uuid 0af9c8e6-8030-462a-9dfd-d52f041685f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 28 18:20:53 compute-0 podman[250840]: 2025-11-28 18:20:53.280715699 +0000 UTC m=+0.080405613 container died 85f477f43ad19c518c80e60a70f5753b575e3995037b544b15a929cb1b782a73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:20:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-85f477f43ad19c518c80e60a70f5753b575e3995037b544b15a929cb1b782a73-userdata-shm.mount: Deactivated successfully.
Nov 28 18:20:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccba47ac12bdffb2848beb1b0c5db2cd405195e097963e9a889133883e65f702-merged.mount: Deactivated successfully.
Nov 28 18:20:53 compute-0 podman[250840]: 2025-11-28 18:20:53.323581166 +0000 UTC m=+0.123271060 container cleanup 85f477f43ad19c518c80e60a70f5753b575e3995037b544b15a929cb1b782a73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:20:53 compute-0 systemd[1]: libpod-conmon-85f477f43ad19c518c80e60a70f5753b575e3995037b544b15a929cb1b782a73.scope: Deactivated successfully.
Nov 28 18:20:53 compute-0 podman[250886]: 2025-11-28 18:20:53.418263622 +0000 UTC m=+0.067262850 container remove 85f477f43ad19c518c80e60a70f5753b575e3995037b544b15a929cb1b782a73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.419 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1978 Content-Type: application/json Date: Fri, 28 Nov 2025 18:20:52 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-dd7c3c9b-4e29-4aaf-a7aa-cf24e770f1d7 x-openstack-request-id: req-dd7c3c9b-4e29-4aaf-a7aa-cf24e770f1d7 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.420 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "38dd3ba8-0751-41a0-b83f-b49dc0b192c6", "name": "tempest-ServerActionsTestJSON-server-120148377", "status": "ACTIVE", "tenant_id": "6ebd016d88464c67abefec4da518674a", "user_id": "44a8645b16fc4d99820df9d0c6154195", "metadata": {}, "hostId": "aedc31bf906e86b0c8ec61f8ed65cdb5fd717d4dd664d38af865935b", "image": {"id": "ffec9e61-65fb-46ae-8d34-338639229ec3", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ffec9e61-65fb-46ae-8d34-338639229ec3"}]}, "flavor": {"id": "b177f611-8f79-4bfd-9a12-e83e9545757b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/b177f611-8f79-4bfd-9a12-e83e9545757b"}]}, "created": "2025-11-28T18:19:04Z", "updated": "2025-11-28T18:20:35Z", "addresses": {"tempest-ServerActionsTestJSON-1305466028-network": [{"version": 4, "addr": "10.100.0.8", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ad:e5:da"}, {"version": 4, "addr": "192.168.122.217", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ad:e5:da"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/38dd3ba8-0751-41a0-b83f-b49dc0b192c6"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/38dd3ba8-0751-41a0-b83f-b49dc0b192c6"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-957693611", "OS-SRV-USG:launched_at": "2025-11-28T18:19:15.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--473913823"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000c", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": 
null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.420 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/38dd3ba8-0751-41a0-b83f-b49dc0b192c6 used request id req-dd7c3c9b-4e29-4aaf-a7aa-cf24e770f1d7 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.422 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '38dd3ba8-0751-41a0-b83f-b49dc0b192c6', 'name': 'tempest-ServerActionsTestJSON-server-120148377', 'flavor': {'id': 'b177f611-8f79-4bfd-9a12-e83e9545757b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ffec9e61-65fb-46ae-8d34-338639229ec3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000c', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ebd016d88464c67abefec4da518674a', 'user_id': '44a8645b16fc4d99820df9d0c6154195', 'hostId': 'aedc31bf906e86b0c8ec61f8ed65cdb5fd717d4dd664d38af865935b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.422 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.422 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.422 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.422 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.423 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-28T18:20:53.422770) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.424 15 DEBUG ceilometer.compute.pollsters [-] Instance 0af9c8e6-8030-462a-9dfd-d52f041685f5 was shut off while getting sample of disk.device.capacity: Failed to inspect data of instance <name=instance-0000000b, id=0af9c8e6-8030-462a-9dfd-d52f041685f5>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Nov 28 18:20:53 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:53.433 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[597ab85f-b444-4cbb-a630-4c1bbdf85bbd]: (4, ('Fri Nov 28 06:20:53 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577 (85f477f43ad19c518c80e60a70f5753b575e3995037b544b15a929cb1b782a73)\n85f477f43ad19c518c80e60a70f5753b575e3995037b544b15a929cb1b782a73\nFri Nov 28 06:20:53 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577 (85f477f43ad19c518c80e60a70f5753b575e3995037b544b15a929cb1b782a73)\n85f477f43ad19c518c80e60a70f5753b575e3995037b544b15a929cb1b782a73\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 28 18:20:53 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:53.436 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[dc0801e3-3524-4ab0-b4b0-fdc7eb75a3f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 28 18:20:53 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:53.437 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16e2cef3-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 28 18:20:53 compute-0 nova_compute[189296]: 2025-11-28 18:20:53.440 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.440 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.441 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.442 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.442 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc1433970b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.442 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.442 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.442 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.442 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:20:53 compute-0 kernel: tap16e2cef3-e0: left promiscuous mode
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.443 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-28T18:20:53.442773) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.445 15 DEBUG ceilometer.compute.pollsters [-] Instance 0af9c8e6-8030-462a-9dfd-d52f041685f5 was shut off while getting sample of disk.device.read.bytes: Failed to inspect data of instance <name=instance-0000000b, id=0af9c8e6-8030-462a-9dfd-d52f041685f5>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Nov 28 18:20:53 compute-0 nova_compute[189296]: 2025-11-28 18:20:53.461 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:20:53 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:53.463 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[3979b60e-2141-4e58-ab58-2b76cea40ff2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 28 18:20:53 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:53.477 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[05abb2cb-6543-421d-b2b1-4b413afd4461]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 28 18:20:53 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:53.479 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[a03c0146-c80e-48e2-b274-06070c029c12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.496 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:53.497 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[5d56de94-1982-4c56-9965-fc2527cbd839]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508656, 'reachable_time': 18177, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250903, 'error': None, 'target': 'ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.497 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.498 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.498 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc1433971d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:20:53 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:53.498 106734 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-16e2cef3-e4a2-4570-962f-fcbf9f3d2577 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 28 18:20:53 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:20:53.499 106734 DEBUG oslo.privsep.daemon [-] privsep: reply[d1823f12-e14b-44a3-8883-add7d9390a3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 28 18:20:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d16e2cef3\x2de4a2\x2d4570\x2d962f\x2dfcbf9f3d2577.mount: Deactivated successfully.
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.500 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.500 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.500 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.501 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.501 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-28T18:20:53.501081) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.503 15 DEBUG ceilometer.compute.pollsters [-] Instance 0af9c8e6-8030-462a-9dfd-d52f041685f5 was shut off while getting sample of disk.device.read.latency: Failed to inspect data of instance <name=instance-0000000b, id=0af9c8e6-8030-462a-9dfd-d52f041685f5>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.503 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.device.read.latency volume: 359220925 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.503 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.device.read.latency volume: 534193 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.503 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.503 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc143397c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.504 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.504 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.504 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.504 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.504 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-28T18:20:53.504379) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.505 15 DEBUG ceilometer.compute.pollsters [-] Instance 0af9c8e6-8030-462a-9dfd-d52f041685f5 was shut off while getting sample of network.incoming.packets.drop: Failed to inspect data of instance <name=instance-0000000b, id=0af9c8e6-8030-462a-9dfd-d52f041685f5>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.509 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 38dd3ba8-0751-41a0-b83f-b49dc0b192c6 / tap9dd54f15-04 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.509 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.509 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.509 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc143397620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.509 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.509 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.510 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.510 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.510 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-28T18:20:53.510213) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.511 15 DEBUG ceilometer.compute.pollsters [-] Instance 0af9c8e6-8030-462a-9dfd-d52f041685f5 was shut off while getting sample of memory.usage: Failed to inspect data of instance <name=instance-0000000b, id=0af9c8e6-8030-462a-9dfd-d52f041685f5>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.539 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.539 15 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance 38dd3ba8-0751-41a0-b83f-b49dc0b192c6: ceilometer.compute.pollsters.NoVolumeException
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.539 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.539 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc143397260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.539 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.539 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.539 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.540 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.540 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-28T18:20:53.540039) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.541 15 DEBUG ceilometer.compute.pollsters [-] Instance 0af9c8e6-8030-462a-9dfd-d52f041685f5 was shut off while getting sample of disk.device.usage: Failed to inspect data of instance <name=instance-0000000b, id=0af9c8e6-8030-462a-9dfd-d52f041685f5>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.541 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.541 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.542 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.542 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc143397290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.542 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.542 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.542 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.542 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.543 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-28T18:20:53.542875) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.543 15 DEBUG ceilometer.compute.pollsters [-] Instance 0af9c8e6-8030-462a-9dfd-d52f041685f5 was shut off while getting sample of disk.device.write.bytes: Failed to inspect data of instance <name=instance-0000000b, id=0af9c8e6-8030-462a-9dfd-d52f041685f5>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.544 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.544 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.544 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.544 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc143408290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.544 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.545 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.545 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.545 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.545 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-28T18:20:53.545303) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.546 15 DEBUG ceilometer.compute.pollsters [-] Instance 0af9c8e6-8030-462a-9dfd-d52f041685f5 was shut off while getting sample of power.state: Failed to inspect data of instance <name=instance-0000000b, id=0af9c8e6-8030-462a-9dfd-d52f041685f5>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.546 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.546 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.546 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc1433972f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.547 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.547 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.547 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.547 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.547 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-28T18:20:53.547363) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.548 15 DEBUG ceilometer.compute.pollsters [-] Instance 0af9c8e6-8030-462a-9dfd-d52f041685f5 was shut off while getting sample of disk.device.write.latency: Failed to inspect data of instance <name=instance-0000000b, id=0af9c8e6-8030-462a-9dfd-d52f041685f5>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.548 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.549 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.549 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.549 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc144640f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.549 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.549 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.549 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.549 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.550 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-28T18:20:53.549878) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.551 15 DEBUG ceilometer.compute.pollsters [-] Instance 0af9c8e6-8030-462a-9dfd-d52f041685f5 was shut off while getting sample of disk.device.write.requests: Failed to inspect data of instance <name=instance-0000000b, id=0af9c8e6-8030-462a-9dfd-d52f041685f5>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.551 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.551 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.552 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.552 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc1433976b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.552 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.552 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.552 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.552 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.553 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-28T18:20:53.552619) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.553 15 DEBUG ceilometer.compute.pollsters [-] Instance 0af9c8e6-8030-462a-9dfd-d52f041685f5 was shut off while getting sample of network.incoming.bytes.delta: Failed to inspect data of instance <name=instance-0000000b, id=0af9c8e6-8030-462a-9dfd-d52f041685f5>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.554 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.554 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.554 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc143397fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.554 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.554 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.554 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.555 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.555 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-28T18:20:53.555018) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.556 15 DEBUG ceilometer.compute.pollsters [-] Instance 0af9c8e6-8030-462a-9dfd-d52f041685f5 was shut off while getting sample of network.outgoing.packets.error: Failed to inspect data of instance <name=instance-0000000b, id=0af9c8e6-8030-462a-9dfd-d52f041685f5>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.556 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.556 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.556 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc14457db80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.557 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.557 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.557 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.557 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.557 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-28T18:20:53.557315) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.558 15 DEBUG ceilometer.compute.pollsters [-] Instance 0af9c8e6-8030-462a-9dfd-d52f041685f5 was shut off while getting sample of cpu: Failed to inspect data of instance <name=instance-0000000b, id=0af9c8e6-8030-462a-9dfd-d52f041685f5>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.558 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/cpu volume: 17480000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.559 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.559 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc143397950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.559 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.559 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.559 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.559 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.559 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.559 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-908375146>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-120148377>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-908375146>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-120148377>]
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.560 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc143397380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.560 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.560 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-28T18:20:53.559710) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.560 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.560 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.560 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.561 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.561 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc143397bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.561 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-28T18:20:53.560858) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.561 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.561 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.561 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.562 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.562 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-28T18:20:53.562073) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.563 15 DEBUG ceilometer.compute.pollsters [-] Instance 0af9c8e6-8030-462a-9dfd-d52f041685f5 was shut off while getting sample of network.incoming.packets: Failed to inspect data of instance <name=instance-0000000b, id=0af9c8e6-8030-462a-9dfd-d52f041685f5>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.563 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.563 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.564 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc1433973e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.564 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.564 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.564 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.564 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.564 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.564 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc143397c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.565 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.565 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.565 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-28T18:20:53.564395) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.565 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.565 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.565 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-28T18:20:53.565426) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.566 15 DEBUG ceilometer.compute.pollsters [-] Instance 0af9c8e6-8030-462a-9dfd-d52f041685f5 was shut off while getting sample of network.incoming.packets.error: Failed to inspect data of instance <name=instance-0000000b, id=0af9c8e6-8030-462a-9dfd-d52f041685f5>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.566 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.567 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.567 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc143397ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.567 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.567 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.567 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.567 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.568 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-28T18:20:53.567690) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.568 15 DEBUG ceilometer.compute.pollsters [-] Instance 0af9c8e6-8030-462a-9dfd-d52f041685f5 was shut off while getting sample of network.outgoing.bytes: Failed to inspect data of instance <name=instance-0000000b, id=0af9c8e6-8030-462a-9dfd-d52f041685f5>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.568 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.569 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.569 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc1460ad370>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.569 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.569 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.569 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.569 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.570 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-28T18:20:53.569603) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.570 15 DEBUG ceilometer.compute.pollsters [-] Instance 0af9c8e6-8030-462a-9dfd-d52f041685f5 was shut off while getting sample of disk.device.allocation: Failed to inspect data of instance <name=instance-0000000b, id=0af9c8e6-8030-462a-9dfd-d52f041685f5>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.570 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.570 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.571 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.571 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc143397d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.571 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.571 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.571 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.571 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.572 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-28T18:20:53.571700) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.572 15 DEBUG ceilometer.compute.pollsters [-] Instance 0af9c8e6-8030-462a-9dfd-d52f041685f5 was shut off while getting sample of network.outgoing.bytes.delta: Failed to inspect data of instance <name=instance-0000000b, id=0af9c8e6-8030-462a-9dfd-d52f041685f5>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.573 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.573 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.573 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc143397e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.573 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.573 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.573 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.573 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.573 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.573 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-908375146>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-120148377>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-908375146>, <NovaLikeServer: tempest-ServerActionsTestJSON-server-120148377>]
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.574 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc143397650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.574 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.574 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.574 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.574 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-28T18:20:53.573737) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-28T18:20:53.574489) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.575 15 DEBUG ceilometer.compute.pollsters [-] Instance 0af9c8e6-8030-462a-9dfd-d52f041685f5 was shut off while getting sample of network.incoming.bytes: Failed to inspect data of instance <name=instance-0000000b, id=0af9c8e6-8030-462a-9dfd-d52f041685f5>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.575 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.575 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.576 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc143397e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.576 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.576 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.576 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.576 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.576 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-28T18:20:53.576309) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.577 15 DEBUG ceilometer.compute.pollsters [-] Instance 0af9c8e6-8030-462a-9dfd-d52f041685f5 was shut off while getting sample of network.outgoing.packets: Failed to inspect data of instance <name=instance-0000000b, id=0af9c8e6-8030-462a-9dfd-d52f041685f5>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.577 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.577 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.577 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc143397f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.577 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.578 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.578 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.578 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.578 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-28T18:20:53.578237) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.579 15 DEBUG ceilometer.compute.pollsters [-] Instance 0af9c8e6-8030-462a-9dfd-d52f041685f5 was shut off while getting sample of network.outgoing.packets.drop: Failed to inspect data of instance <name=instance-0000000b, id=0af9c8e6-8030-462a-9dfd-d52f041685f5>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.579 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.579 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.580 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc143397230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.580 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.580 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.580 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.580 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.581 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-28T18:20:53.580332) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.581 15 DEBUG ceilometer.compute.pollsters [-] Instance 0af9c8e6-8030-462a-9dfd-d52f041685f5 was shut off while getting sample of disk.device.read.requests: Failed to inspect data of instance <name=instance-0000000b, id=0af9c8e6-8030-462a-9dfd-d52f041685f5>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.581 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.581 15 DEBUG ceilometer.compute.pollsters [-] 38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.582 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.582 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.583 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.583 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.583 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.583 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.584 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.584 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.584 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.584 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.584 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.584 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.585 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.585 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.585 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.585 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.585 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.585 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.586 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.586 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.586 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.586 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.586 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.587 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.587 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.587 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:20:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:20:53.587 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:20:53 compute-0 nova_compute[189296]: 2025-11-28 18:20:53.618 189300 DEBUG nova.virt.libvirt.vif [None req-137db523-a0ae-4fe8-94e3-3b9ce6aee88f 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-28T18:18:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-908375146',display_name='tempest-TestNetworkBasicOps-server-908375146',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-908375146',id=11,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN6RYNuMt0ux6thdsomjwa4Qs3aHYbmEffy0T9nTP+KpV9lW5YOnUFrYqthp/EVQN7jr7eca+MHb2GG22h2Znvet440rtEqhcxFnCX0g2QQ1dII6j+XnRVx4kNOEKGv/ow==',key_name='tempest-TestNetworkBasicOps-844617280',keypairs=<?>,launch_index=0,launched_at=2025-11-28T18:19:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c41bbf2b30ca428fbd489c3dc29e8045',ramdisk_id='',reservation_id='r-b39009u9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-543144913',owner_user_name='tempest-TestNetworkBasicOps-543144913-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-28T18:19:04Z,user_data=None,user_id='0052e0d91c7e4c98bd11644a4dca818a',uuid=0af9c8e6-8030-462a-9dfd-d52f041685f5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7a69f46e-77c5-4129-9783-254170a7422b", "address": "fa:16:3e:45:0d:59", "network": {"id": "16e2cef3-e4a2-4570-962f-fcbf9f3d2577", "bridge": "br-int", "label": "tempest-network-smoke--630554822", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c41bbf2b30ca428fbd489c3dc29e8045", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a69f46e-77", "ovs_interfaceid": "7a69f46e-77c5-4129-9783-254170a7422b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 28 18:20:53 compute-0 nova_compute[189296]: 2025-11-28 18:20:53.619 189300 DEBUG nova.network.os_vif_util [None req-137db523-a0ae-4fe8-94e3-3b9ce6aee88f 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Converting VIF {"id": "7a69f46e-77c5-4129-9783-254170a7422b", "address": "fa:16:3e:45:0d:59", "network": {"id": "16e2cef3-e4a2-4570-962f-fcbf9f3d2577", "bridge": "br-int", "label": "tempest-network-smoke--630554822", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c41bbf2b30ca428fbd489c3dc29e8045", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a69f46e-77", "ovs_interfaceid": "7a69f46e-77c5-4129-9783-254170a7422b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:20:53 compute-0 nova_compute[189296]: 2025-11-28 18:20:53.620 189300 DEBUG nova.network.os_vif_util [None req-137db523-a0ae-4fe8-94e3-3b9ce6aee88f 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:45:0d:59,bridge_name='br-int',has_traffic_filtering=True,id=7a69f46e-77c5-4129-9783-254170a7422b,network=Network(16e2cef3-e4a2-4570-962f-fcbf9f3d2577),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a69f46e-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:20:53 compute-0 nova_compute[189296]: 2025-11-28 18:20:53.621 189300 DEBUG os_vif [None req-137db523-a0ae-4fe8-94e3-3b9ce6aee88f 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:0d:59,bridge_name='br-int',has_traffic_filtering=True,id=7a69f46e-77c5-4129-9783-254170a7422b,network=Network(16e2cef3-e4a2-4570-962f-fcbf9f3d2577),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a69f46e-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 28 18:20:53 compute-0 nova_compute[189296]: 2025-11-28 18:20:53.623 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:53 compute-0 nova_compute[189296]: 2025-11-28 18:20:53.624 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a69f46e-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:20:53 compute-0 nova_compute[189296]: 2025-11-28 18:20:53.625 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:53 compute-0 nova_compute[189296]: 2025-11-28 18:20:53.627 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:53 compute-0 nova_compute[189296]: 2025-11-28 18:20:53.630 189300 INFO os_vif [None req-137db523-a0ae-4fe8-94e3-3b9ce6aee88f 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:0d:59,bridge_name='br-int',has_traffic_filtering=True,id=7a69f46e-77c5-4129-9783-254170a7422b,network=Network(16e2cef3-e4a2-4570-962f-fcbf9f3d2577),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a69f46e-77')#033[00m
Nov 28 18:20:53 compute-0 nova_compute[189296]: 2025-11-28 18:20:53.631 189300 INFO nova.virt.libvirt.driver [None req-137db523-a0ae-4fe8-94e3-3b9ce6aee88f 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Deleting instance files /var/lib/nova/instances/0af9c8e6-8030-462a-9dfd-d52f041685f5_del#033[00m
Nov 28 18:20:53 compute-0 nova_compute[189296]: 2025-11-28 18:20:53.632 189300 INFO nova.virt.libvirt.driver [None req-137db523-a0ae-4fe8-94e3-3b9ce6aee88f 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Deletion of /var/lib/nova/instances/0af9c8e6-8030-462a-9dfd-d52f041685f5_del complete#033[00m
Nov 28 18:20:53 compute-0 nova_compute[189296]: 2025-11-28 18:20:53.668 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:53 compute-0 nova_compute[189296]: 2025-11-28 18:20:53.751 189300 INFO nova.compute.manager [None req-137db523-a0ae-4fe8-94e3-3b9ce6aee88f 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Nov 28 18:20:53 compute-0 nova_compute[189296]: 2025-11-28 18:20:53.752 189300 DEBUG oslo.service.loopingcall [None req-137db523-a0ae-4fe8-94e3-3b9ce6aee88f 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 28 18:20:53 compute-0 nova_compute[189296]: 2025-11-28 18:20:53.752 189300 DEBUG nova.compute.manager [-] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 28 18:20:53 compute-0 nova_compute[189296]: 2025-11-28 18:20:53.753 189300 DEBUG nova.network.neutron [-] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 28 18:20:54 compute-0 nova_compute[189296]: 2025-11-28 18:20:54.585 189300 DEBUG nova.network.neutron [-] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:20:54 compute-0 nova_compute[189296]: 2025-11-28 18:20:54.603 189300 INFO nova.compute.manager [-] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Took 0.85 seconds to deallocate network for instance.#033[00m
Nov 28 18:20:54 compute-0 nova_compute[189296]: 2025-11-28 18:20:54.661 189300 DEBUG nova.compute.manager [req-caa94261-2252-4e37-abfd-f7561b866595 req-9a86bade-53d2-4946-bebc-dd120082ff88 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Received event network-vif-deleted-7a69f46e-77c5-4129-9783-254170a7422b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:20:54 compute-0 nova_compute[189296]: 2025-11-28 18:20:54.666 189300 DEBUG oslo_concurrency.lockutils [None req-137db523-a0ae-4fe8-94e3-3b9ce6aee88f 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:20:54 compute-0 nova_compute[189296]: 2025-11-28 18:20:54.666 189300 DEBUG oslo_concurrency.lockutils [None req-137db523-a0ae-4fe8-94e3-3b9ce6aee88f 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:20:54 compute-0 nova_compute[189296]: 2025-11-28 18:20:54.729 189300 DEBUG nova.compute.provider_tree [None req-137db523-a0ae-4fe8-94e3-3b9ce6aee88f 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:20:54 compute-0 nova_compute[189296]: 2025-11-28 18:20:54.743 189300 DEBUG nova.scheduler.client.report [None req-137db523-a0ae-4fe8-94e3-3b9ce6aee88f 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:20:54 compute-0 nova_compute[189296]: 2025-11-28 18:20:54.763 189300 DEBUG oslo_concurrency.lockutils [None req-137db523-a0ae-4fe8-94e3-3b9ce6aee88f 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:20:54 compute-0 nova_compute[189296]: 2025-11-28 18:20:54.788 189300 INFO nova.scheduler.client.report [None req-137db523-a0ae-4fe8-94e3-3b9ce6aee88f 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Deleted allocations for instance 0af9c8e6-8030-462a-9dfd-d52f041685f5#033[00m
Nov 28 18:20:54 compute-0 nova_compute[189296]: 2025-11-28 18:20:54.842 189300 DEBUG oslo_concurrency.lockutils [None req-137db523-a0ae-4fe8-94e3-3b9ce6aee88f 0052e0d91c7e4c98bd11644a4dca818a c41bbf2b30ca428fbd489c3dc29e8045 - - default default] Lock "0af9c8e6-8030-462a-9dfd-d52f041685f5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:20:56 compute-0 podman[250904]: 2025-11-28 18:20:56.117946331 +0000 UTC m=+0.163745149 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 28 18:20:56 compute-0 nova_compute[189296]: 2025-11-28 18:20:56.321 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:58 compute-0 ovn_controller[97771]: 2025-11-28T18:20:58Z|00157|binding|INFO|Releasing lport 9f681880-a374-4938-a7d7-30fad6716ed2 from this chassis (sb_readonly=0)
Nov 28 18:20:58 compute-0 nova_compute[189296]: 2025-11-28 18:20:58.104 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:58 compute-0 nova_compute[189296]: 2025-11-28 18:20:58.626 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:58 compute-0 nova_compute[189296]: 2025-11-28 18:20:58.670 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:20:59 compute-0 podman[203494]: time="2025-11-28T18:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:20:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:20:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4787 "" "Go-http-client/1.1"
Nov 28 18:21:01 compute-0 openstack_network_exporter[205632]: ERROR   18:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:21:01 compute-0 openstack_network_exporter[205632]: ERROR   18:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:21:01 compute-0 openstack_network_exporter[205632]: ERROR   18:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:21:01 compute-0 openstack_network_exporter[205632]: ERROR   18:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:21:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:21:01 compute-0 openstack_network_exporter[205632]: ERROR   18:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:21:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:21:02 compute-0 nova_compute[189296]: 2025-11-28 18:21:02.782 189300 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764354047.7797313, 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:21:02 compute-0 nova_compute[189296]: 2025-11-28 18:21:02.782 189300 INFO nova.compute.manager [-] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] VM Stopped (Lifecycle Event)#033[00m
Nov 28 18:21:02 compute-0 nova_compute[189296]: 2025-11-28 18:21:02.836 189300 DEBUG nova.compute.manager [None req-86cb2592-cd7f-4f99-baa1-d8c8ace59db3 - - - - - -] [instance: 5e570bcf-69d9-41f4-b621-d75ff7b1bd6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:21:03 compute-0 nova_compute[189296]: 2025-11-28 18:21:03.628 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:03 compute-0 nova_compute[189296]: 2025-11-28 18:21:03.673 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:04 compute-0 podman[250930]: 2025-11-28 18:21:04.017333935 +0000 UTC m=+0.070940421 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:21:06 compute-0 ovn_controller[97771]: 2025-11-28T18:21:06Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ad:e5:da 10.100.0.8
Nov 28 18:21:08 compute-0 nova_compute[189296]: 2025-11-28 18:21:08.273 189300 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764354053.2721431, 0af9c8e6-8030-462a-9dfd-d52f041685f5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:21:08 compute-0 nova_compute[189296]: 2025-11-28 18:21:08.274 189300 INFO nova.compute.manager [-] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] VM Stopped (Lifecycle Event)#033[00m
Nov 28 18:21:08 compute-0 nova_compute[189296]: 2025-11-28 18:21:08.425 189300 DEBUG nova.compute.manager [None req-8d52bc43-1a76-4f69-ad6f-d5f8e537c544 - - - - - -] [instance: 0af9c8e6-8030-462a-9dfd-d52f041685f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:21:08 compute-0 nova_compute[189296]: 2025-11-28 18:21:08.630 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:08 compute-0 nova_compute[189296]: 2025-11-28 18:21:08.676 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:11 compute-0 ovn_controller[97771]: 2025-11-28T18:21:11Z|00158|binding|INFO|Releasing lport 9f681880-a374-4938-a7d7-30fad6716ed2 from this chassis (sb_readonly=0)
Nov 28 18:21:11 compute-0 nova_compute[189296]: 2025-11-28 18:21:11.128 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:12 compute-0 nova_compute[189296]: 2025-11-28 18:21:12.082 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:13 compute-0 nova_compute[189296]: 2025-11-28 18:21:13.632 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:13 compute-0 nova_compute[189296]: 2025-11-28 18:21:13.678 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:16 compute-0 podman[250959]: 2025-11-28 18:21:16.03815827 +0000 UTC m=+0.086583146 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 28 18:21:16 compute-0 podman[250960]: 2025-11-28 18:21:16.050598077 +0000 UTC m=+0.106399375 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 28 18:21:16 compute-0 podman[250958]: 2025-11-28 18:21:16.082430033 +0000 UTC m=+0.136754754 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, version=9.6, io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, io.buildah.version=1.33.7)
Nov 28 18:21:16 compute-0 nova_compute[189296]: 2025-11-28 18:21:16.308 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:16 compute-0 nova_compute[189296]: 2025-11-28 18:21:16.647 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:21:18 compute-0 nova_compute[189296]: 2025-11-28 18:21:18.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:21:18 compute-0 nova_compute[189296]: 2025-11-28 18:21:18.635 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:18 compute-0 nova_compute[189296]: 2025-11-28 18:21:18.682 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:19 compute-0 nova_compute[189296]: 2025-11-28 18:21:19.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:21:19 compute-0 nova_compute[189296]: 2025-11-28 18:21:19.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:21:20 compute-0 nova_compute[189296]: 2025-11-28 18:21:20.543 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-38dd3ba8-0751-41a0-b83f-b49dc0b192c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:21:20 compute-0 nova_compute[189296]: 2025-11-28 18:21:20.544 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-38dd3ba8-0751-41a0-b83f-b49dc0b192c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:21:20 compute-0 nova_compute[189296]: 2025-11-28 18:21:20.544 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:21:22 compute-0 nova_compute[189296]: 2025-11-28 18:21:22.847 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Updating instance_info_cache with network_info: [{"id": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "address": "fa:16:3e:ad:e5:da", "network": {"id": "cecb017f-4e6e-4722-8798-5d73232e6fbd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1305466028-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ebd016d88464c67abefec4da518674a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dd54f15-04", "ovs_interfaceid": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:21:22 compute-0 nova_compute[189296]: 2025-11-28 18:21:22.870 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-38dd3ba8-0751-41a0-b83f-b49dc0b192c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:21:22 compute-0 nova_compute[189296]: 2025-11-28 18:21:22.871 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:21:22 compute-0 nova_compute[189296]: 2025-11-28 18:21:22.871 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:21:22 compute-0 nova_compute[189296]: 2025-11-28 18:21:22.871 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:21:22 compute-0 nova_compute[189296]: 2025-11-28 18:21:22.872 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:21:23 compute-0 podman[251014]: 2025-11-28 18:21:23.030217136 +0000 UTC m=+0.079224646 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:21:23 compute-0 podman[251016]: 2025-11-28 18:21:23.044266372 +0000 UTC m=+0.092004120 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.4, container_name=kepler, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, distribution-scope=public, io.openshift.tags=base rhel9, name=ubi9, io.openshift.expose-services=, release=1214.1726694543, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 28 18:21:23 compute-0 podman[251015]: 2025-11-28 18:21:23.059004075 +0000 UTC m=+0.102182990 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 28 18:21:23 compute-0 podman[251017]: 2025-11-28 18:21:23.072373845 +0000 UTC m=+0.110006874 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 28 18:21:23 compute-0 nova_compute[189296]: 2025-11-28 18:21:23.637 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:23 compute-0 nova_compute[189296]: 2025-11-28 18:21:23.685 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:23 compute-0 nova_compute[189296]: 2025-11-28 18:21:23.752 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:23 compute-0 nova_compute[189296]: 2025-11-28 18:21:23.771 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:24 compute-0 nova_compute[189296]: 2025-11-28 18:21:24.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:21:25 compute-0 nova_compute[189296]: 2025-11-28 18:21:25.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:21:26 compute-0 nova_compute[189296]: 2025-11-28 18:21:26.016 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:21:26 compute-0 nova_compute[189296]: 2025-11-28 18:21:26.017 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:21:26 compute-0 nova_compute[189296]: 2025-11-28 18:21:26.017 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:21:26 compute-0 nova_compute[189296]: 2025-11-28 18:21:26.017 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:21:26 compute-0 nova_compute[189296]: 2025-11-28 18:21:26.592 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:21:26 compute-0 nova_compute[189296]: 2025-11-28 18:21:26.652 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:21:26 compute-0 nova_compute[189296]: 2025-11-28 18:21:26.654 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:21:26 compute-0 nova_compute[189296]: 2025-11-28 18:21:26.707 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:21:27 compute-0 nova_compute[189296]: 2025-11-28 18:21:27.042 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:21:27 compute-0 nova_compute[189296]: 2025-11-28 18:21:27.045 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5142MB free_disk=72.31216812133789GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:21:27 compute-0 nova_compute[189296]: 2025-11-28 18:21:27.047 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:21:27 compute-0 nova_compute[189296]: 2025-11-28 18:21:27.047 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:21:27 compute-0 podman[251097]: 2025-11-28 18:21:27.048609946 +0000 UTC m=+0.109004760 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 28 18:21:27 compute-0 nova_compute[189296]: 2025-11-28 18:21:27.124 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 38dd3ba8-0751-41a0-b83f-b49dc0b192c6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:21:27 compute-0 nova_compute[189296]: 2025-11-28 18:21:27.125 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:21:27 compute-0 nova_compute[189296]: 2025-11-28 18:21:27.125 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:21:27 compute-0 nova_compute[189296]: 2025-11-28 18:21:27.139 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing inventories for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 28 18:21:27 compute-0 nova_compute[189296]: 2025-11-28 18:21:27.158 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Updating ProviderTree inventory for provider d10a9930-4504-4222-97f7-6727a5a2d43b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 28 18:21:27 compute-0 nova_compute[189296]: 2025-11-28 18:21:27.158 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Updating inventory in ProviderTree for provider d10a9930-4504-4222-97f7-6727a5a2d43b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 28 18:21:27 compute-0 nova_compute[189296]: 2025-11-28 18:21:27.171 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing aggregate associations for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 28 18:21:27 compute-0 nova_compute[189296]: 2025-11-28 18:21:27.200 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing trait associations for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b, traits: HW_CPU_X86_ABM,COMPUTE_NODE,HW_CPU_X86_SVM,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SATA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 28 18:21:27 compute-0 nova_compute[189296]: 2025-11-28 18:21:27.237 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:21:27 compute-0 nova_compute[189296]: 2025-11-28 18:21:27.254 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:21:27 compute-0 nova_compute[189296]: 2025-11-28 18:21:27.276 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:21:27 compute-0 nova_compute[189296]: 2025-11-28 18:21:27.277 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:21:28 compute-0 nova_compute[189296]: 2025-11-28 18:21:28.640 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:28 compute-0 nova_compute[189296]: 2025-11-28 18:21:28.687 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:29 compute-0 nova_compute[189296]: 2025-11-28 18:21:29.278 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:21:29 compute-0 nova_compute[189296]: 2025-11-28 18:21:29.279 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:21:29 compute-0 podman[203494]: time="2025-11-28T18:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:21:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:21:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4790 "" "Go-http-client/1.1"
Nov 28 18:21:31 compute-0 openstack_network_exporter[205632]: ERROR   18:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:21:31 compute-0 openstack_network_exporter[205632]: ERROR   18:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:21:31 compute-0 openstack_network_exporter[205632]: ERROR   18:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:21:31 compute-0 openstack_network_exporter[205632]: ERROR   18:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:21:31 compute-0 openstack_network_exporter[205632]: ERROR   18:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:21:33 compute-0 nova_compute[189296]: 2025-11-28 18:21:33.010 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:33 compute-0 nova_compute[189296]: 2025-11-28 18:21:33.163 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:33 compute-0 nova_compute[189296]: 2025-11-28 18:21:33.643 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:33 compute-0 nova_compute[189296]: 2025-11-28 18:21:33.690 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:35 compute-0 podman[251120]: 2025-11-28 18:21:35.019396619 +0000 UTC m=+0.073346350 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:21:38 compute-0 nova_compute[189296]: 2025-11-28 18:21:38.646 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:38 compute-0 nova_compute[189296]: 2025-11-28 18:21:38.692 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:38 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:38.845 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:8b:d3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:a2:f8:d3:3f:9a'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:21:38 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:38.846 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 28 18:21:38 compute-0 nova_compute[189296]: 2025-11-28 18:21:38.847 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:42 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:42.849 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d60b742f-7e94-4137-b50a-cfc8eac54167, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.260 189300 DEBUG oslo_concurrency.lockutils [None req-2c8f479b-710f-4f82-a807-e9a7148fbcff 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Acquiring lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.261 189300 DEBUG oslo_concurrency.lockutils [None req-2c8f479b-710f-4f82-a807-e9a7148fbcff 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.262 189300 DEBUG oslo_concurrency.lockutils [None req-2c8f479b-710f-4f82-a807-e9a7148fbcff 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Acquiring lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.263 189300 DEBUG oslo_concurrency.lockutils [None req-2c8f479b-710f-4f82-a807-e9a7148fbcff 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.264 189300 DEBUG oslo_concurrency.lockutils [None req-2c8f479b-710f-4f82-a807-e9a7148fbcff 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.266 189300 INFO nova.compute.manager [None req-2c8f479b-710f-4f82-a807-e9a7148fbcff 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Terminating instance#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.269 189300 DEBUG nova.compute.manager [None req-2c8f479b-710f-4f82-a807-e9a7148fbcff 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 28 18:21:43 compute-0 kernel: tap9dd54f15-04 (unregistering): left promiscuous mode
Nov 28 18:21:43 compute-0 NetworkManager[56307]: <info>  [1764354103.3133] device (tap9dd54f15-04): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 28 18:21:43 compute-0 ovn_controller[97771]: 2025-11-28T18:21:43Z|00159|binding|INFO|Releasing lport 9dd54f15-0412-4387-bc8f-07d1b4702dbb from this chassis (sb_readonly=0)
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.322 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:43 compute-0 ovn_controller[97771]: 2025-11-28T18:21:43Z|00160|binding|INFO|Setting lport 9dd54f15-0412-4387-bc8f-07d1b4702dbb down in Southbound
Nov 28 18:21:43 compute-0 ovn_controller[97771]: 2025-11-28T18:21:43Z|00161|binding|INFO|Removing iface tap9dd54f15-04 ovn-installed in OVS
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.327 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:43 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:43.341 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:e5:da 10.100.0.8'], port_security=['fa:16:3e:ad:e5:da 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '38dd3ba8-0751-41a0-b83f-b49dc0b192c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cecb017f-4e6e-4722-8798-5d73232e6fbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ebd016d88464c67abefec4da518674a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '54c85ea7-0279-4254-b89c-237ccce3cf9e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.217', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e84ddcd7-545a-4e48-a6ce-b80b286b2303, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=9dd54f15-0412-4387-bc8f-07d1b4702dbb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:21:43 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:43.342 106624 INFO neutron.agent.ovn.metadata.agent [-] Port 9dd54f15-0412-4387-bc8f-07d1b4702dbb in datapath cecb017f-4e6e-4722-8798-5d73232e6fbd unbound from our chassis#033[00m
Nov 28 18:21:43 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:43.343 106624 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cecb017f-4e6e-4722-8798-5d73232e6fbd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 28 18:21:43 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:43.344 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[bc7af5d6-0cfb-4565-8da5-b66d6b8b69ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:21:43 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:43.345 106624 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd namespace which is not needed anymore#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.362 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:43 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 28 18:21:43 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000c.scope: Consumed 40.369s CPU time.
Nov 28 18:21:43 compute-0 systemd-machined[155703]: Machine qemu-14-instance-0000000c terminated.
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.455 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.496 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.502 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:43 compute-0 neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd[250602]: [NOTICE]   (250610) : haproxy version is 2.8.14-c23fe91
Nov 28 18:21:43 compute-0 neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd[250602]: [NOTICE]   (250610) : path to executable is /usr/sbin/haproxy
Nov 28 18:21:43 compute-0 neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd[250602]: [WARNING]  (250610) : Exiting Master process...
Nov 28 18:21:43 compute-0 neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd[250602]: [ALERT]    (250610) : Current worker (250613) exited with code 143 (Terminated)
Nov 28 18:21:43 compute-0 neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd[250602]: [WARNING]  (250610) : All workers exited. Exiting... (0)
Nov 28 18:21:43 compute-0 systemd[1]: libpod-24707d47a0c29db69a313ba889b68d77711da4958c1f22ddb667d3e6b5a225e3.scope: Deactivated successfully.
Nov 28 18:21:43 compute-0 podman[251166]: 2025-11-28 18:21:43.514053101 +0000 UTC m=+0.050524217 container died 24707d47a0c29db69a313ba889b68d77711da4958c1f22ddb667d3e6b5a225e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.544 189300 INFO nova.virt.libvirt.driver [-] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Instance destroyed successfully.#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.545 189300 DEBUG nova.objects.instance [None req-2c8f479b-710f-4f82-a807-e9a7148fbcff 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lazy-loading 'resources' on Instance uuid 38dd3ba8-0751-41a0-b83f-b49dc0b192c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:21:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-24707d47a0c29db69a313ba889b68d77711da4958c1f22ddb667d3e6b5a225e3-userdata-shm.mount: Deactivated successfully.
Nov 28 18:21:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfc6d14e26c1377bcea9c4038e667b7a7859f4a781cbacfefef35dd22f2737dc-merged.mount: Deactivated successfully.
Nov 28 18:21:43 compute-0 podman[251166]: 2025-11-28 18:21:43.560338812 +0000 UTC m=+0.096809928 container cleanup 24707d47a0c29db69a313ba889b68d77711da4958c1f22ddb667d3e6b5a225e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.561 189300 DEBUG nova.virt.libvirt.vif [None req-2c8f479b-710f-4f82-a807-e9a7148fbcff 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-28T18:19:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-120148377',display_name='tempest-ServerActionsTestJSON-server-120148377',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-120148377',id=12,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDNKDhkiMtsztQmvM2gRYqVRTHcsj/9P9Cg/+MCIxNFg5QbGBxNz8mS/LylMSt0qq29jzqRfKycq5Qi4LzakhV4vYbtYARzjXolBVflKv2a5LVTztOBqSNR1wZxrvf10hw==',key_name='tempest-keypair-957693611',keypairs=<?>,launch_index=0,launched_at=2025-11-28T18:19:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ebd016d88464c67abefec4da518674a',ramdisk_id='',reservation_id='r-jl0w8ww4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1827601863',owner_user_name='tempest-ServerActionsTestJSON-1827601863-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-28T18:20:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='44a8645b16fc4d99820df9d0c6154195',uuid=38dd3ba8-0751-41a0-b83f-b49dc0b192c6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "address": "fa:16:3e:ad:e5:da", "network": {"id": "cecb017f-4e6e-4722-8798-5d73232e6fbd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1305466028-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ebd016d88464c67abefec4da518674a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dd54f15-04", "ovs_interfaceid": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.562 189300 DEBUG nova.network.os_vif_util [None req-2c8f479b-710f-4f82-a807-e9a7148fbcff 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Converting VIF {"id": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "address": "fa:16:3e:ad:e5:da", "network": {"id": "cecb017f-4e6e-4722-8798-5d73232e6fbd", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1305466028-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ebd016d88464c67abefec4da518674a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9dd54f15-04", "ovs_interfaceid": "9dd54f15-0412-4387-bc8f-07d1b4702dbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.563 189300 DEBUG nova.network.os_vif_util [None req-2c8f479b-710f-4f82-a807-e9a7148fbcff 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ad:e5:da,bridge_name='br-int',has_traffic_filtering=True,id=9dd54f15-0412-4387-bc8f-07d1b4702dbb,network=Network(cecb017f-4e6e-4722-8798-5d73232e6fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dd54f15-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.563 189300 DEBUG os_vif [None req-2c8f479b-710f-4f82-a807-e9a7148fbcff 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ad:e5:da,bridge_name='br-int',has_traffic_filtering=True,id=9dd54f15-0412-4387-bc8f-07d1b4702dbb,network=Network(cecb017f-4e6e-4722-8798-5d73232e6fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dd54f15-04') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.565 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.566 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9dd54f15-04, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.567 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.568 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:43 compute-0 systemd[1]: libpod-conmon-24707d47a0c29db69a313ba889b68d77711da4958c1f22ddb667d3e6b5a225e3.scope: Deactivated successfully.
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.572 189300 INFO os_vif [None req-2c8f479b-710f-4f82-a807-e9a7148fbcff 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ad:e5:da,bridge_name='br-int',has_traffic_filtering=True,id=9dd54f15-0412-4387-bc8f-07d1b4702dbb,network=Network(cecb017f-4e6e-4722-8798-5d73232e6fbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9dd54f15-04')#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.572 189300 INFO nova.virt.libvirt.driver [None req-2c8f479b-710f-4f82-a807-e9a7148fbcff 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Deleting instance files /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6_del#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.573 189300 INFO nova.virt.libvirt.driver [None req-2c8f479b-710f-4f82-a807-e9a7148fbcff 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Deletion of /var/lib/nova/instances/38dd3ba8-0751-41a0-b83f-b49dc0b192c6_del complete#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.586 189300 DEBUG nova.compute.manager [req-0b17e903-b9a7-48e6-aac6-51ca4867d002 req-5c1c8678-2872-470d-ab96-2a4ddfbbb17a 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Received event network-vif-unplugged-9dd54f15-0412-4387-bc8f-07d1b4702dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.586 189300 DEBUG oslo_concurrency.lockutils [req-0b17e903-b9a7-48e6-aac6-51ca4867d002 req-5c1c8678-2872-470d-ab96-2a4ddfbbb17a 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.587 189300 DEBUG oslo_concurrency.lockutils [req-0b17e903-b9a7-48e6-aac6-51ca4867d002 req-5c1c8678-2872-470d-ab96-2a4ddfbbb17a 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.587 189300 DEBUG oslo_concurrency.lockutils [req-0b17e903-b9a7-48e6-aac6-51ca4867d002 req-5c1c8678-2872-470d-ab96-2a4ddfbbb17a 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.587 189300 DEBUG nova.compute.manager [req-0b17e903-b9a7-48e6-aac6-51ca4867d002 req-5c1c8678-2872-470d-ab96-2a4ddfbbb17a 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] No waiting events found dispatching network-vif-unplugged-9dd54f15-0412-4387-bc8f-07d1b4702dbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.587 189300 DEBUG nova.compute.manager [req-0b17e903-b9a7-48e6-aac6-51ca4867d002 req-5c1c8678-2872-470d-ab96-2a4ddfbbb17a 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Received event network-vif-unplugged-9dd54f15-0412-4387-bc8f-07d1b4702dbb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.623 189300 INFO nova.compute.manager [None req-2c8f479b-710f-4f82-a807-e9a7148fbcff 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Took 0.35 seconds to destroy the instance on the hypervisor.#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.623 189300 DEBUG oslo.service.loopingcall [None req-2c8f479b-710f-4f82-a807-e9a7148fbcff 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.624 189300 DEBUG nova.compute.manager [-] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.625 189300 DEBUG nova.network.neutron [-] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 28 18:21:43 compute-0 podman[251210]: 2025-11-28 18:21:43.62877541 +0000 UTC m=+0.044409376 container remove 24707d47a0c29db69a313ba889b68d77711da4958c1f22ddb667d3e6b5a225e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 28 18:21:43 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:43.635 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[554633e5-ee58-42eb-ad27-000a1a8f55da]: (4, ('Fri Nov 28 06:21:43 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd (24707d47a0c29db69a313ba889b68d77711da4958c1f22ddb667d3e6b5a225e3)\n24707d47a0c29db69a313ba889b68d77711da4958c1f22ddb667d3e6b5a225e3\nFri Nov 28 06:21:43 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd (24707d47a0c29db69a313ba889b68d77711da4958c1f22ddb667d3e6b5a225e3)\n24707d47a0c29db69a313ba889b68d77711da4958c1f22ddb667d3e6b5a225e3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:21:43 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:43.637 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[cc9a309d-51b6-4a16-ba4e-cadca97a0fd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:21:43 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:43.639 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcecb017f-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.640 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:43 compute-0 kernel: tapcecb017f-40: left promiscuous mode
Nov 28 18:21:43 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:43.648 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[c8e65e7d-618d-4b0b-8a5c-fe58d07019b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.658 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:43 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:43.663 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[ccf7acb9-95c8-4866-8955-9977609437db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:21:43 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:43.665 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[b6c63f33-6bc5-44ee-a74f-c69d4c160ed0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:21:43 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:43.682 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a14ed7-e25a-4e1d-90a7-6722a283a636]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518032, 'reachable_time': 38046, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251223, 'error': None, 'target': 'ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:21:43 compute-0 systemd[1]: run-netns-ovnmeta\x2dcecb017f\x2d4e6e\x2d4722\x2d8798\x2d5d73232e6fbd.mount: Deactivated successfully.
Nov 28 18:21:43 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:43.685 106734 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cecb017f-4e6e-4722-8798-5d73232e6fbd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 28 18:21:43 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:43.686 106734 DEBUG oslo.privsep.daemon [-] privsep: reply[7364491e-8d12-4bea-954a-26d069a23f21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:21:43 compute-0 nova_compute[189296]: 2025-11-28 18:21:43.694 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:45 compute-0 nova_compute[189296]: 2025-11-28 18:21:45.685 189300 DEBUG nova.compute.manager [req-eeaeeb33-9ae8-4647-9372-7a4b843c9bbe req-73d13937-37bc-43b2-bf9b-2269d9fedae1 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Received event network-vif-plugged-9dd54f15-0412-4387-bc8f-07d1b4702dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:21:45 compute-0 nova_compute[189296]: 2025-11-28 18:21:45.685 189300 DEBUG oslo_concurrency.lockutils [req-eeaeeb33-9ae8-4647-9372-7a4b843c9bbe req-73d13937-37bc-43b2-bf9b-2269d9fedae1 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:21:45 compute-0 nova_compute[189296]: 2025-11-28 18:21:45.686 189300 DEBUG oslo_concurrency.lockutils [req-eeaeeb33-9ae8-4647-9372-7a4b843c9bbe req-73d13937-37bc-43b2-bf9b-2269d9fedae1 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:21:45 compute-0 nova_compute[189296]: 2025-11-28 18:21:45.686 189300 DEBUG oslo_concurrency.lockutils [req-eeaeeb33-9ae8-4647-9372-7a4b843c9bbe req-73d13937-37bc-43b2-bf9b-2269d9fedae1 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:21:45 compute-0 nova_compute[189296]: 2025-11-28 18:21:45.687 189300 DEBUG nova.compute.manager [req-eeaeeb33-9ae8-4647-9372-7a4b843c9bbe req-73d13937-37bc-43b2-bf9b-2269d9fedae1 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] No waiting events found dispatching network-vif-plugged-9dd54f15-0412-4387-bc8f-07d1b4702dbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:21:45 compute-0 nova_compute[189296]: 2025-11-28 18:21:45.687 189300 WARNING nova.compute.manager [req-eeaeeb33-9ae8-4647-9372-7a4b843c9bbe req-73d13937-37bc-43b2-bf9b-2269d9fedae1 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Received unexpected event network-vif-plugged-9dd54f15-0412-4387-bc8f-07d1b4702dbb for instance with vm_state active and task_state deleting.#033[00m
Nov 28 18:21:46 compute-0 nova_compute[189296]: 2025-11-28 18:21:46.700 189300 DEBUG nova.network.neutron [-] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:21:46 compute-0 nova_compute[189296]: 2025-11-28 18:21:46.722 189300 INFO nova.compute.manager [-] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Took 3.10 seconds to deallocate network for instance.#033[00m
Nov 28 18:21:46 compute-0 nova_compute[189296]: 2025-11-28 18:21:46.784 189300 DEBUG oslo_concurrency.lockutils [None req-2c8f479b-710f-4f82-a807-e9a7148fbcff 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:21:46 compute-0 nova_compute[189296]: 2025-11-28 18:21:46.785 189300 DEBUG oslo_concurrency.lockutils [None req-2c8f479b-710f-4f82-a807-e9a7148fbcff 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:21:46 compute-0 nova_compute[189296]: 2025-11-28 18:21:46.810 189300 DEBUG nova.compute.manager [req-b7d50d4a-1dca-42ac-bc83-3e18ed03d0eb req-56df589e-ad2b-4286-a26c-4222bf95b705 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Received event network-vif-deleted-9dd54f15-0412-4387-bc8f-07d1b4702dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:21:46 compute-0 nova_compute[189296]: 2025-11-28 18:21:46.889 189300 DEBUG nova.compute.provider_tree [None req-2c8f479b-710f-4f82-a807-e9a7148fbcff 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:21:46 compute-0 nova_compute[189296]: 2025-11-28 18:21:46.906 189300 DEBUG nova.scheduler.client.report [None req-2c8f479b-710f-4f82-a807-e9a7148fbcff 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:21:46 compute-0 nova_compute[189296]: 2025-11-28 18:21:46.942 189300 DEBUG oslo_concurrency.lockutils [None req-2c8f479b-710f-4f82-a807-e9a7148fbcff 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:21:46 compute-0 nova_compute[189296]: 2025-11-28 18:21:46.994 189300 INFO nova.scheduler.client.report [None req-2c8f479b-710f-4f82-a807-e9a7148fbcff 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Deleted allocations for instance 38dd3ba8-0751-41a0-b83f-b49dc0b192c6#033[00m
Nov 28 18:21:47 compute-0 nova_compute[189296]: 2025-11-28 18:21:47.053 189300 DEBUG oslo_concurrency.lockutils [None req-2c8f479b-710f-4f82-a807-e9a7148fbcff 44a8645b16fc4d99820df9d0c6154195 6ebd016d88464c67abefec4da518674a - - default default] Lock "38dd3ba8-0751-41a0-b83f-b49dc0b192c6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:21:47 compute-0 podman[251224]: 2025-11-28 18:21:47.062716777 +0000 UTC m=+0.103847502 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 28 18:21:47 compute-0 podman[251226]: 2025-11-28 18:21:47.085602212 +0000 UTC m=+0.129344201 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:21:47 compute-0 podman[251225]: 2025-11-28 18:21:47.089017276 +0000 UTC m=+0.129192657 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=f26160204c78771e78cdd2489258319b, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 28 18:21:48 compute-0 nova_compute[189296]: 2025-11-28 18:21:48.569 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:48 compute-0 nova_compute[189296]: 2025-11-28 18:21:48.697 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:49 compute-0 nova_compute[189296]: 2025-11-28 18:21:49.486 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:51 compute-0 nova_compute[189296]: 2025-11-28 18:21:51.861 189300 DEBUG oslo_concurrency.lockutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Acquiring lock "6b358f92-75c9-4c1b-8a5c-733f8ded1782" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:21:51 compute-0 nova_compute[189296]: 2025-11-28 18:21:51.861 189300 DEBUG oslo_concurrency.lockutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Lock "6b358f92-75c9-4c1b-8a5c-733f8ded1782" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:21:51 compute-0 nova_compute[189296]: 2025-11-28 18:21:51.887 189300 DEBUG nova.compute.manager [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 28 18:21:51 compute-0 nova_compute[189296]: 2025-11-28 18:21:51.959 189300 DEBUG oslo_concurrency.lockutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:21:51 compute-0 nova_compute[189296]: 2025-11-28 18:21:51.960 189300 DEBUG oslo_concurrency.lockutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:21:51 compute-0 nova_compute[189296]: 2025-11-28 18:21:51.969 189300 DEBUG nova.virt.hardware [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 28 18:21:51 compute-0 nova_compute[189296]: 2025-11-28 18:21:51.969 189300 INFO nova.compute.claims [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.087 189300 DEBUG nova.compute.provider_tree [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.102 189300 DEBUG nova.scheduler.client.report [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.123 189300 DEBUG oslo_concurrency.lockutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.124 189300 DEBUG nova.compute.manager [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.200 189300 DEBUG nova.compute.manager [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.201 189300 DEBUG nova.network.neutron [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.224 189300 INFO nova.virt.libvirt.driver [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.244 189300 DEBUG nova.compute.manager [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.327 189300 DEBUG nova.compute.manager [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.329 189300 DEBUG nova.virt.libvirt.driver [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.329 189300 INFO nova.virt.libvirt.driver [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Creating image(s)#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.330 189300 DEBUG oslo_concurrency.lockutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Acquiring lock "/var/lib/nova/instances/6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.331 189300 DEBUG oslo_concurrency.lockutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Lock "/var/lib/nova/instances/6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.332 189300 DEBUG oslo_concurrency.lockutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Lock "/var/lib/nova/instances/6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.348 189300 DEBUG oslo_concurrency.processutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.409 189300 DEBUG nova.policy [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7197aa467f2241e2a95a2fc057f4d01c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b1d450a53bb64bd7b153b2c9c627f3c1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.423 189300 DEBUG oslo_concurrency.processutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.424 189300 DEBUG oslo_concurrency.lockutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Acquiring lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.424 189300 DEBUG oslo_concurrency.lockutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.442 189300 DEBUG oslo_concurrency.processutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.501 189300 DEBUG oslo_concurrency.processutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.503 189300 DEBUG oslo_concurrency.processutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c,backing_fmt=raw /var/lib/nova/instances/6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.547 189300 DEBUG oslo_concurrency.processutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c,backing_fmt=raw /var/lib/nova/instances/6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk 1073741824" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.548 189300 DEBUG oslo_concurrency.lockutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Lock "98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.549 189300 DEBUG oslo_concurrency.processutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.606 189300 DEBUG oslo_concurrency.processutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.607 189300 DEBUG nova.virt.disk.api [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Checking if we can resize image /var/lib/nova/instances/6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.608 189300 DEBUG oslo_concurrency.processutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:21:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:52.633 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:21:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:52.633 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:21:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:52.633 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.666 189300 DEBUG oslo_concurrency.processutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.668 189300 DEBUG nova.virt.disk.api [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Cannot resize image /var/lib/nova/instances/6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.668 189300 DEBUG nova.objects.instance [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Lazy-loading 'migration_context' on Instance uuid 6b358f92-75c9-4c1b-8a5c-733f8ded1782 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.686 189300 DEBUG nova.virt.libvirt.driver [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.687 189300 DEBUG nova.virt.libvirt.driver [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Ensure instance console log exists: /var/lib/nova/instances/6b358f92-75c9-4c1b-8a5c-733f8ded1782/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.688 189300 DEBUG oslo_concurrency.lockutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.688 189300 DEBUG oslo_concurrency.lockutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:21:52 compute-0 nova_compute[189296]: 2025-11-28 18:21:52.689 189300 DEBUG oslo_concurrency.lockutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:21:53 compute-0 nova_compute[189296]: 2025-11-28 18:21:53.572 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:53 compute-0 nova_compute[189296]: 2025-11-28 18:21:53.698 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:53 compute-0 nova_compute[189296]: 2025-11-28 18:21:53.777 189300 DEBUG nova.network.neutron [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Successfully created port: cc026db1-bd40-49d3-8cc6-fd774decc303 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 28 18:21:54 compute-0 podman[251295]: 2025-11-28 18:21:54.041799835 +0000 UTC m=+0.085139251 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125)
Nov 28 18:21:54 compute-0 podman[251292]: 2025-11-28 18:21:54.047505165 +0000 UTC m=+0.095756112 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 28 18:21:54 compute-0 podman[251293]: 2025-11-28 18:21:54.056439015 +0000 UTC m=+0.106620230 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:21:54 compute-0 podman[251294]: 2025-11-28 18:21:54.076735066 +0000 UTC m=+0.123048575 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, container_name=kepler, maintainer=Red Hat, Inc., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, release=1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, release-0.7.12=, com.redhat.component=ubi9-container, distribution-scope=public, vcs-type=git)
Nov 28 18:21:55 compute-0 nova_compute[189296]: 2025-11-28 18:21:55.268 189300 DEBUG nova.network.neutron [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Successfully updated port: cc026db1-bd40-49d3-8cc6-fd774decc303 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 28 18:21:55 compute-0 nova_compute[189296]: 2025-11-28 18:21:55.294 189300 DEBUG oslo_concurrency.lockutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Acquiring lock "refresh_cache-6b358f92-75c9-4c1b-8a5c-733f8ded1782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:21:55 compute-0 nova_compute[189296]: 2025-11-28 18:21:55.295 189300 DEBUG oslo_concurrency.lockutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Acquired lock "refresh_cache-6b358f92-75c9-4c1b-8a5c-733f8ded1782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:21:55 compute-0 nova_compute[189296]: 2025-11-28 18:21:55.295 189300 DEBUG nova.network.neutron [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 28 18:21:55 compute-0 nova_compute[189296]: 2025-11-28 18:21:55.409 189300 DEBUG nova.compute.manager [req-99010b2f-92c9-4b6d-92fd-b6b47cb76102 req-a3f52607-2c34-4843-9702-1bd895e29dcb 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Received event network-changed-cc026db1-bd40-49d3-8cc6-fd774decc303 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:21:55 compute-0 nova_compute[189296]: 2025-11-28 18:21:55.409 189300 DEBUG nova.compute.manager [req-99010b2f-92c9-4b6d-92fd-b6b47cb76102 req-a3f52607-2c34-4843-9702-1bd895e29dcb 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Refreshing instance network info cache due to event network-changed-cc026db1-bd40-49d3-8cc6-fd774decc303. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 28 18:21:55 compute-0 nova_compute[189296]: 2025-11-28 18:21:55.410 189300 DEBUG oslo_concurrency.lockutils [req-99010b2f-92c9-4b6d-92fd-b6b47cb76102 req-a3f52607-2c34-4843-9702-1bd895e29dcb 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-6b358f92-75c9-4c1b-8a5c-733f8ded1782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:21:55 compute-0 nova_compute[189296]: 2025-11-28 18:21:55.452 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:55 compute-0 nova_compute[189296]: 2025-11-28 18:21:55.496 189300 DEBUG nova.network.neutron [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 28 18:21:55 compute-0 nova_compute[189296]: 2025-11-28 18:21:55.714 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.432 189300 DEBUG nova.network.neutron [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Updating instance_info_cache with network_info: [{"id": "cc026db1-bd40-49d3-8cc6-fd774decc303", "address": "fa:16:3e:ca:73:7d", "network": {"id": "ec1293c7-fc62-4fad-8363-d05beea77f1d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-9270562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1d450a53bb64bd7b153b2c9c627f3c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc026db1-bd", "ovs_interfaceid": "cc026db1-bd40-49d3-8cc6-fd774decc303", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.454 189300 DEBUG oslo_concurrency.lockutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Releasing lock "refresh_cache-6b358f92-75c9-4c1b-8a5c-733f8ded1782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.454 189300 DEBUG nova.compute.manager [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Instance network_info: |[{"id": "cc026db1-bd40-49d3-8cc6-fd774decc303", "address": "fa:16:3e:ca:73:7d", "network": {"id": "ec1293c7-fc62-4fad-8363-d05beea77f1d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-9270562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1d450a53bb64bd7b153b2c9c627f3c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc026db1-bd", "ovs_interfaceid": "cc026db1-bd40-49d3-8cc6-fd774decc303", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.455 189300 DEBUG oslo_concurrency.lockutils [req-99010b2f-92c9-4b6d-92fd-b6b47cb76102 req-a3f52607-2c34-4843-9702-1bd895e29dcb 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-6b358f92-75c9-4c1b-8a5c-733f8ded1782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.455 189300 DEBUG nova.network.neutron [req-99010b2f-92c9-4b6d-92fd-b6b47cb76102 req-a3f52607-2c34-4843-9702-1bd895e29dcb 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Refreshing network info cache for port cc026db1-bd40-49d3-8cc6-fd774decc303 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.458 189300 DEBUG nova.virt.libvirt.driver [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Start _get_guest_xml network_info=[{"id": "cc026db1-bd40-49d3-8cc6-fd774decc303", "address": "fa:16:3e:ca:73:7d", "network": {"id": "ec1293c7-fc62-4fad-8363-d05beea77f1d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-9270562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1d450a53bb64bd7b153b2c9c627f3c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc026db1-bd", "ovs_interfaceid": "cc026db1-bd40-49d3-8cc6-fd774decc303", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-28T18:16:38Z,direct_url=<?>,disk_format='qcow2',id=ffec9e61-65fb-46ae-8d34-338639229ec3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-28T18:16:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'image_id': 'ffec9e61-65fb-46ae-8d34-338639229ec3'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.466 189300 WARNING nova.virt.libvirt.driver [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.474 189300 DEBUG nova.virt.libvirt.host [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.474 189300 DEBUG nova.virt.libvirt.host [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.479 189300 DEBUG nova.virt.libvirt.host [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.479 189300 DEBUG nova.virt.libvirt.host [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.480 189300 DEBUG nova.virt.libvirt.driver [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.480 189300 DEBUG nova.virt.hardware [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-28T18:16:37Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b177f611-8f79-4bfd-9a12-e83e9545757b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-28T18:16:38Z,direct_url=<?>,disk_format='qcow2',id=ffec9e61-65fb-46ae-8d34-338639229ec3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='79ee04b003ca4eb8a045699c7852a8b0',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-28T18:16:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.480 189300 DEBUG nova.virt.hardware [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.481 189300 DEBUG nova.virt.hardware [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.481 189300 DEBUG nova.virt.hardware [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.481 189300 DEBUG nova.virt.hardware [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.482 189300 DEBUG nova.virt.hardware [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.482 189300 DEBUG nova.virt.hardware [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.482 189300 DEBUG nova.virt.hardware [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.483 189300 DEBUG nova.virt.hardware [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.483 189300 DEBUG nova.virt.hardware [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.483 189300 DEBUG nova.virt.hardware [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.486 189300 DEBUG nova.virt.libvirt.vif [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-28T18:21:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1812090626',display_name='tempest-TestServerBasicOps-server-1812090626',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1812090626',id=14,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBIL3uAOGTo+nMzP3wX27O/PfnsMUHgfu5KskMbB7er4XF35b7mwr0mDblM+CV5ci+6ML/mzE/9nnMD4AGEKYgiWIXSD818xQQvavqp95iXvEMVe2GYwVCN2yCC59qi26A==',key_name='tempest-TestServerBasicOps-1283283664',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b1d450a53bb64bd7b153b2c9c627f3c1',ramdisk_id='',reservation_id='r-wasghw9p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-640022481',owner_user_name='tempest-TestServerBasicOps-640022481-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:21:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7197aa467f2241e2a95a2fc057f4d01c',uuid=6b358f92-75c9-4c1b-8a5c-733f8ded1782,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cc026db1-bd40-49d3-8cc6-fd774decc303", "address": "fa:16:3e:ca:73:7d", "network": {"id": "ec1293c7-fc62-4fad-8363-d05beea77f1d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-9270562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1d450a53bb64bd7b153b2c9c627f3c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc026db1-bd", "ovs_interfaceid": "cc026db1-bd40-49d3-8cc6-fd774decc303", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.487 189300 DEBUG nova.network.os_vif_util [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Converting VIF {"id": "cc026db1-bd40-49d3-8cc6-fd774decc303", "address": "fa:16:3e:ca:73:7d", "network": {"id": "ec1293c7-fc62-4fad-8363-d05beea77f1d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-9270562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1d450a53bb64bd7b153b2c9c627f3c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc026db1-bd", "ovs_interfaceid": "cc026db1-bd40-49d3-8cc6-fd774decc303", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.487 189300 DEBUG nova.network.os_vif_util [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:73:7d,bridge_name='br-int',has_traffic_filtering=True,id=cc026db1-bd40-49d3-8cc6-fd774decc303,network=Network(ec1293c7-fc62-4fad-8363-d05beea77f1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc026db1-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.488 189300 DEBUG nova.objects.instance [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6b358f92-75c9-4c1b-8a5c-733f8ded1782 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.511 189300 DEBUG nova.virt.libvirt.driver [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] End _get_guest_xml xml=<domain type="kvm">
Nov 28 18:21:56 compute-0 nova_compute[189296]:  <uuid>6b358f92-75c9-4c1b-8a5c-733f8ded1782</uuid>
Nov 28 18:21:56 compute-0 nova_compute[189296]:  <name>instance-0000000e</name>
Nov 28 18:21:56 compute-0 nova_compute[189296]:  <memory>131072</memory>
Nov 28 18:21:56 compute-0 nova_compute[189296]:  <vcpu>1</vcpu>
Nov 28 18:21:56 compute-0 nova_compute[189296]:  <metadata>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <nova:name>tempest-TestServerBasicOps-server-1812090626</nova:name>
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <nova:creationTime>2025-11-28 18:21:56</nova:creationTime>
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <nova:flavor name="m1.nano">
Nov 28 18:21:56 compute-0 nova_compute[189296]:        <nova:memory>128</nova:memory>
Nov 28 18:21:56 compute-0 nova_compute[189296]:        <nova:disk>1</nova:disk>
Nov 28 18:21:56 compute-0 nova_compute[189296]:        <nova:swap>0</nova:swap>
Nov 28 18:21:56 compute-0 nova_compute[189296]:        <nova:ephemeral>0</nova:ephemeral>
Nov 28 18:21:56 compute-0 nova_compute[189296]:        <nova:vcpus>1</nova:vcpus>
Nov 28 18:21:56 compute-0 nova_compute[189296]:      </nova:flavor>
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <nova:owner>
Nov 28 18:21:56 compute-0 nova_compute[189296]:        <nova:user uuid="7197aa467f2241e2a95a2fc057f4d01c">tempest-TestServerBasicOps-640022481-project-member</nova:user>
Nov 28 18:21:56 compute-0 nova_compute[189296]:        <nova:project uuid="b1d450a53bb64bd7b153b2c9c627f3c1">tempest-TestServerBasicOps-640022481</nova:project>
Nov 28 18:21:56 compute-0 nova_compute[189296]:      </nova:owner>
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <nova:root type="image" uuid="ffec9e61-65fb-46ae-8d34-338639229ec3"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <nova:ports>
Nov 28 18:21:56 compute-0 nova_compute[189296]:        <nova:port uuid="cc026db1-bd40-49d3-8cc6-fd774decc303">
Nov 28 18:21:56 compute-0 nova_compute[189296]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:        </nova:port>
Nov 28 18:21:56 compute-0 nova_compute[189296]:      </nova:ports>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    </nova:instance>
Nov 28 18:21:56 compute-0 nova_compute[189296]:  </metadata>
Nov 28 18:21:56 compute-0 nova_compute[189296]:  <sysinfo type="smbios">
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <system>
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <entry name="manufacturer">RDO</entry>
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <entry name="product">OpenStack Compute</entry>
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <entry name="serial">6b358f92-75c9-4c1b-8a5c-733f8ded1782</entry>
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <entry name="uuid">6b358f92-75c9-4c1b-8a5c-733f8ded1782</entry>
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <entry name="family">Virtual Machine</entry>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    </system>
Nov 28 18:21:56 compute-0 nova_compute[189296]:  </sysinfo>
Nov 28 18:21:56 compute-0 nova_compute[189296]:  <os>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <boot dev="hd"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <smbios mode="sysinfo"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:  </os>
Nov 28 18:21:56 compute-0 nova_compute[189296]:  <features>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <acpi/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <apic/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <vmcoreinfo/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:  </features>
Nov 28 18:21:56 compute-0 nova_compute[189296]:  <clock offset="utc">
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <timer name="pit" tickpolicy="delay"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <timer name="hpet" present="no"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:  </clock>
Nov 28 18:21:56 compute-0 nova_compute[189296]:  <cpu mode="host-model" match="exact">
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <topology sockets="1" cores="1" threads="1"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:  </cpu>
Nov 28 18:21:56 compute-0 nova_compute[189296]:  <devices>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <disk type="file" device="disk">
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <target dev="vda" bus="virtio"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <disk type="file" device="cdrom">
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <driver name="qemu" type="raw" cache="none"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk.config"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <target dev="sda" bus="sata"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <interface type="ethernet">
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <mac address="fa:16:3e:ca:73:7d"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <driver name="vhost" rx_queue_size="512"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <mtu size="1442"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <target dev="tapcc026db1-bd"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    </interface>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <serial type="pty">
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <log file="/var/lib/nova/instances/6b358f92-75c9-4c1b-8a5c-733f8ded1782/console.log" append="off"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    </serial>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <video>
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    </video>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <input type="tablet" bus="usb"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <rng model="virtio">
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <backend model="random">/dev/urandom</backend>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    </rng>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <controller type="usb" index="0"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    <memballoon model="virtio">
Nov 28 18:21:56 compute-0 nova_compute[189296]:      <stats period="10"/>
Nov 28 18:21:56 compute-0 nova_compute[189296]:    </memballoon>
Nov 28 18:21:56 compute-0 nova_compute[189296]:  </devices>
Nov 28 18:21:56 compute-0 nova_compute[189296]: </domain>
Nov 28 18:21:56 compute-0 nova_compute[189296]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.512 189300 DEBUG nova.compute.manager [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Preparing to wait for external event network-vif-plugged-cc026db1-bd40-49d3-8cc6-fd774decc303 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.512 189300 DEBUG oslo_concurrency.lockutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Acquiring lock "6b358f92-75c9-4c1b-8a5c-733f8ded1782-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.512 189300 DEBUG oslo_concurrency.lockutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Lock "6b358f92-75c9-4c1b-8a5c-733f8ded1782-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.513 189300 DEBUG oslo_concurrency.lockutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Lock "6b358f92-75c9-4c1b-8a5c-733f8ded1782-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.513 189300 DEBUG nova.virt.libvirt.vif [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-28T18:21:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1812090626',display_name='tempest-TestServerBasicOps-server-1812090626',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1812090626',id=14,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBIL3uAOGTo+nMzP3wX27O/PfnsMUHgfu5KskMbB7er4XF35b7mwr0mDblM+CV5ci+6ML/mzE/9nnMD4AGEKYgiWIXSD818xQQvavqp95iXvEMVe2GYwVCN2yCC59qi26A==',key_name='tempest-TestServerBasicOps-1283283664',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b1d450a53bb64bd7b153b2c9c627f3c1',ramdisk_id='',reservation_id='r-wasghw9p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-640022481',owner_user_name='tempest-TestServerBasicOps-640022481-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:21:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7197aa467f2241e2a95a2fc057f4d01c',uuid=6b358f92-75c9-4c1b-8a5c-733f8ded1782,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cc026db1-bd40-49d3-8cc6-fd774decc303", "address": "fa:16:3e:ca:73:7d", "network": {"id": "ec1293c7-fc62-4fad-8363-d05beea77f1d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-9270562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1d450a53bb64bd7b153b2c9c627f3c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc026db1-bd", "ovs_interfaceid": "cc026db1-bd40-49d3-8cc6-fd774decc303", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.514 189300 DEBUG nova.network.os_vif_util [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Converting VIF {"id": "cc026db1-bd40-49d3-8cc6-fd774decc303", "address": "fa:16:3e:ca:73:7d", "network": {"id": "ec1293c7-fc62-4fad-8363-d05beea77f1d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-9270562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1d450a53bb64bd7b153b2c9c627f3c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc026db1-bd", "ovs_interfaceid": "cc026db1-bd40-49d3-8cc6-fd774decc303", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.514 189300 DEBUG nova.network.os_vif_util [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:73:7d,bridge_name='br-int',has_traffic_filtering=True,id=cc026db1-bd40-49d3-8cc6-fd774decc303,network=Network(ec1293c7-fc62-4fad-8363-d05beea77f1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc026db1-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.514 189300 DEBUG os_vif [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:73:7d,bridge_name='br-int',has_traffic_filtering=True,id=cc026db1-bd40-49d3-8cc6-fd774decc303,network=Network(ec1293c7-fc62-4fad-8363-d05beea77f1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc026db1-bd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.515 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.515 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.515 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.518 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.518 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcc026db1-bd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.518 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcc026db1-bd, col_values=(('external_ids', {'iface-id': 'cc026db1-bd40-49d3-8cc6-fd774decc303', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ca:73:7d', 'vm-uuid': '6b358f92-75c9-4c1b-8a5c-733f8ded1782'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.520 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.522 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 18:21:56 compute-0 NetworkManager[56307]: <info>  [1764354116.5218] manager: (tapcc026db1-bd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.526 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.528 189300 INFO os_vif [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:73:7d,bridge_name='br-int',has_traffic_filtering=True,id=cc026db1-bd40-49d3-8cc6-fd774decc303,network=Network(ec1293c7-fc62-4fad-8363-d05beea77f1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc026db1-bd')#033[00m
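The plug sequence above turns on the `DbSetCommand` at 18:21:56.518: the `external_ids` it writes on the OVS `Interface` row (`iface-id`, `attached-mac`, `vm-uuid`) are what lets ovn-controller match the tap device to the Neutron/OVN logical port it later claims. A minimal sketch of pulling that mapping out of such a log line (the sample line is abridged from the log above; the helper name is ours, not part of any OpenStack API):

```python
import re

# Abridged DbSetCommand entry from the ovsdbapp transaction log. os-vif sets these
# external_ids on the Interface row; ovn-controller keys its port binding on iface-id.
LINE = (
    "DbSetCommand(_result=None, table=Interface, record=tapcc026db1-bd, "
    "col_values=(('external_ids', {'iface-id': 'cc026db1-bd40-49d3-8cc6-fd774decc303', "
    "'iface-status': 'active', 'attached-mac': 'fa:16:3e:ca:73:7d', "
    "'vm-uuid': '6b358f92-75c9-4c1b-8a5c-733f8ded1782'}),))"
)

def parse_external_ids(line: str) -> dict:
    """Extract the external_ids key/value pairs from a DbSetCommand log line."""
    match = re.search(r"'external_ids', \{([^}]*)\}", line)
    if not match:
        return {}
    pairs = re.findall(r"'([^']+)': '([^']*)'", match.group(1))
    return dict(pairs)

ids = parse_external_ids(LINE)
print(ids["iface-id"])  # the Neutron port UUID that ovn-controller claims at 18:21:57
```

The same `iface-id` value reappears in the `Claiming lport` and `Setting lport ... up in Southbound` messages from ovn_controller below, which is how the plug can be traced across the three services in this log.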
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.581 189300 DEBUG nova.virt.libvirt.driver [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.582 189300 DEBUG nova.virt.libvirt.driver [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.582 189300 DEBUG nova.virt.libvirt.driver [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] No VIF found with MAC fa:16:3e:ca:73:7d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 28 18:21:56 compute-0 nova_compute[189296]: 2025-11-28 18:21:56.583 189300 INFO nova.virt.libvirt.driver [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Using config drive#033[00m
Nov 28 18:21:57 compute-0 nova_compute[189296]: 2025-11-28 18:21:57.257 189300 INFO nova.virt.libvirt.driver [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Creating config drive at /var/lib/nova/instances/6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk.config#033[00m
Nov 28 18:21:57 compute-0 nova_compute[189296]: 2025-11-28 18:21:57.270 189300 DEBUG oslo_concurrency.processutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqav5kmmq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:21:57 compute-0 nova_compute[189296]: 2025-11-28 18:21:57.401 189300 DEBUG oslo_concurrency.processutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqav5kmmq" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:21:57 compute-0 kernel: tapcc026db1-bd: entered promiscuous mode
Nov 28 18:21:57 compute-0 ovn_controller[97771]: 2025-11-28T18:21:57Z|00162|binding|INFO|Claiming lport cc026db1-bd40-49d3-8cc6-fd774decc303 for this chassis.
Nov 28 18:21:57 compute-0 ovn_controller[97771]: 2025-11-28T18:21:57Z|00163|binding|INFO|cc026db1-bd40-49d3-8cc6-fd774decc303: Claiming fa:16:3e:ca:73:7d 10.100.0.5
Nov 28 18:21:57 compute-0 NetworkManager[56307]: <info>  [1764354117.5044] manager: (tapcc026db1-bd): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Nov 28 18:21:57 compute-0 nova_compute[189296]: 2025-11-28 18:21:57.502 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.519 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:73:7d 10.100.0.5'], port_security=['fa:16:3e:ca:73:7d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '6b358f92-75c9-4c1b-8a5c-733f8ded1782', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec1293c7-fc62-4fad-8363-d05beea77f1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1d450a53bb64bd7b153b2c9c627f3c1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4ae52902-3d4c-4c2b-9227-2708d93eb132 b9ef8706-c336-4710-abcc-5ba43506f30b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5c397d12-2b6f-4f0c-a9d3-8b717254aec4, chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=cc026db1-bd40-49d3-8cc6-fd774decc303) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.521 106624 INFO neutron.agent.ovn.metadata.agent [-] Port cc026db1-bd40-49d3-8cc6-fd774decc303 in datapath ec1293c7-fc62-4fad-8363-d05beea77f1d bound to our chassis#033[00m
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.524 106624 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec1293c7-fc62-4fad-8363-d05beea77f1d#033[00m
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.541 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[f54b130a-19a4-4b3b-8a9f-1875ed1f485a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.542 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapec1293c7-f1 in ovnmeta-ec1293c7-fc62-4fad-8363-d05beea77f1d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.545 238909 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapec1293c7-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.545 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[9524b869-14ee-49e4-bb1c-e0ea62dc9b7d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.546 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[c4728f40-df0b-455a-857e-cabeaa48e0d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.556 106734 DEBUG oslo.privsep.daemon [-] privsep: reply[54dffd03-5ada-41fb-ad59-76652af395c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:21:57 compute-0 systemd-udevd[251405]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 18:21:57 compute-0 systemd-machined[155703]: New machine qemu-15-instance-0000000e.
Nov 28 18:21:57 compute-0 nova_compute[189296]: 2025-11-28 18:21:57.585 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:57 compute-0 nova_compute[189296]: 2025-11-28 18:21:57.590 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:57 compute-0 NetworkManager[56307]: <info>  [1764354117.5921] device (tapcc026db1-bd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 28 18:21:57 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000e.
Nov 28 18:21:57 compute-0 NetworkManager[56307]: <info>  [1764354117.5929] device (tapcc026db1-bd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 28 18:21:57 compute-0 ovn_controller[97771]: 2025-11-28T18:21:57Z|00164|binding|INFO|Setting lport cc026db1-bd40-49d3-8cc6-fd774decc303 ovn-installed in OVS
Nov 28 18:21:57 compute-0 ovn_controller[97771]: 2025-11-28T18:21:57Z|00165|binding|INFO|Setting lport cc026db1-bd40-49d3-8cc6-fd774decc303 up in Southbound
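The port is dataplane-ready once ovn-controller sets the lport up in the Southbound DB, so the plug-to-up latency can be read directly off the timestamps: os-vif reports the VIF plugged at 18:21:56.528 and the Southbound update lands at 18:21:57 (journald gives the ovn-controller lines only second resolution here, so the sub-second part is an assumption). A small sketch of that arithmetic, with the timestamps copied from the lines above:

```python
from datetime import datetime

# Timestamps taken from this log: "Successfully plugged vif" (nova_compute/os-vif)
# and "Setting lport ... up in Southbound" (ovn_controller, second resolution only,
# so .000 below is assumed rather than logged).
PLUGGED = "2025-11-28 18:21:56.528"
LPORT_UP = "2025-11-28 18:21:57.000"

def elapsed_seconds(start: str, end: str) -> float:
    """Seconds between two journal timestamps in 'YYYY-mm-dd HH:MM:SS.fff' form."""
    fmt = "%Y-%m-%d %H:%M:%S.%f"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds()

print(elapsed_seconds(PLUGGED, LPORT_UP))  # ~0.472s from plug to lport up
```

Under half a second from OVS port creation to OVN claiming and raising the lport is typical for a healthy chassis; a large gap here usually points at ovn-controller or Southbound DB latency rather than nova.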
Nov 28 18:21:57 compute-0 nova_compute[189296]: 2025-11-28 18:21:57.595 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.597 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[1b7d9fa2-2486-4e58-a4ab-6c9576215d5f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.625 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[a909971f-2c61-4f59-938d-0f2296fc3940]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:21:57 compute-0 NetworkManager[56307]: <info>  [1764354117.6326] manager: (tapec1293c7-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/71)
Nov 28 18:21:57 compute-0 systemd-udevd[251410]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.630 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[40abbaf3-b227-48cd-942f-d672175e53e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:21:57 compute-0 podman[251380]: 2025-11-28 18:21:57.637026941 +0000 UTC m=+0.139283387 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.674 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[afc6ed93-2d49-40e7-a5e7-fab7ad32caf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.678 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[3481c5aa-f5ea-4e5e-b5f8-bd0712f88b22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:21:57 compute-0 NetworkManager[56307]: <info>  [1764354117.7025] device (tapec1293c7-f0): carrier: link connected
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.712 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[04a3d9d5-cece-49fc-8160-bd44b2217a78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.735 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb82998-f9db-4c23-bd65-15b4444edcc0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec1293c7-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:1b:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526329, 'reachable_time': 32723, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251445, 'error': None, 'target': 'ovnmeta-ec1293c7-fc62-4fad-8363-d05beea77f1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.754 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[a5448743-6b60-4760-918b-e34597f205d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe65:1b7d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 526329, 'tstamp': 526329}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251446, 'error': None, 'target': 'ovnmeta-ec1293c7-fc62-4fad-8363-d05beea77f1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.779 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[63334cb3-75f4-4b9e-ba77-1e5a2c1e329b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec1293c7-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:1b:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526329, 'reachable_time': 32723, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251447, 'error': None, 'target': 'ovnmeta-ec1293c7-fc62-4fad-8363-d05beea77f1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.818 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[2519db37-d0dd-4f38-948a-a14c6a3e2eaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.894 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[c4cb4f57-8419-45c4-ad96-b915329c0865]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.896 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec1293c7-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.897 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.898 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec1293c7-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:21:57 compute-0 kernel: tapec1293c7-f0: entered promiscuous mode
Nov 28 18:21:57 compute-0 NetworkManager[56307]: <info>  [1764354117.9020] manager: (tapec1293c7-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Nov 28 18:21:57 compute-0 nova_compute[189296]: 2025-11-28 18:21:57.904 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.909 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec1293c7-f0, col_values=(('external_ids', {'iface-id': 'ba6f3dd6-5c75-47b7-8e12-fc0d1c1b7899'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:21:57 compute-0 ovn_controller[97771]: 2025-11-28T18:21:57Z|00166|binding|INFO|Releasing lport ba6f3dd6-5c75-47b7-8e12-fc0d1c1b7899 from this chassis (sb_readonly=0)
Nov 28 18:21:57 compute-0 nova_compute[189296]: 2025-11-28 18:21:57.911 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.915 106624 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ec1293c7-fc62-4fad-8363-d05beea77f1d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ec1293c7-fc62-4fad-8363-d05beea77f1d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.917 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[f1b189b1-b19b-4907-bdf9-a52fd69a690d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.918 106624 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: global
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]:    log         /dev/log local0 debug
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]:    log-tag     haproxy-metadata-proxy-ec1293c7-fc62-4fad-8363-d05beea77f1d
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]:    user        root
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]:    group       root
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]:    maxconn     1024
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]:    pidfile     /var/lib/neutron/external/pids/ec1293c7-fc62-4fad-8363-d05beea77f1d.pid.haproxy
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]:    daemon
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: defaults
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]:    log global
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]:    mode http
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]:    option httplog
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]:    option dontlognull
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]:    option http-server-close
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]:    option forwardfor
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]:    retries                 3
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]:    timeout http-request    30s
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]:    timeout connect         30s
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]:    timeout client          32s
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]:    timeout server          32s
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]:    timeout http-keep-alive 30s
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: listen listener
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]:    bind 169.254.169.254:80
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]:    server metadata /var/lib/neutron/metadata_proxy
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]:    http-request add-header X-OVN-Network-ID ec1293c7-fc62-4fad-8363-d05beea77f1d
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 28 18:21:57 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:21:57.919 106624 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ec1293c7-fc62-4fad-8363-d05beea77f1d', 'env', 'PROCESS_TAG=haproxy-ec1293c7-fc62-4fad-8363-d05beea77f1d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ec1293c7-fc62-4fad-8363-d05beea77f1d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 28 18:21:57 compute-0 nova_compute[189296]: 2025-11-28 18:21:57.924 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:57 compute-0 nova_compute[189296]: 2025-11-28 18:21:57.943 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764354117.943021, 6b358f92-75c9-4c1b-8a5c-733f8ded1782 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:21:57 compute-0 nova_compute[189296]: 2025-11-28 18:21:57.943 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] VM Started (Lifecycle Event)#033[00m
Nov 28 18:21:57 compute-0 nova_compute[189296]: 2025-11-28 18:21:57.970 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:21:57 compute-0 nova_compute[189296]: 2025-11-28 18:21:57.975 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764354117.9431422, 6b358f92-75c9-4c1b-8a5c-733f8ded1782 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:21:57 compute-0 nova_compute[189296]: 2025-11-28 18:21:57.976 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] VM Paused (Lifecycle Event)#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.000 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.005 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.027 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 28 18:21:58 compute-0 podman[251485]: 2025-11-28 18:21:58.37901114 +0000 UTC m=+0.070949021 container create 952d003d55eaa62f50e008fde202edb9be27e15f24a5c9c759582b24123d176b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec1293c7-fc62-4fad-8363-d05beea77f1d, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:21:58 compute-0 systemd[1]: Started libpod-conmon-952d003d55eaa62f50e008fde202edb9be27e15f24a5c9c759582b24123d176b.scope.
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.431 189300 DEBUG nova.compute.manager [req-eb7d2401-b679-4409-b349-a0a90d8c1514 req-8974c654-386e-4eb5-87ee-548863690e7e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Received event network-vif-plugged-cc026db1-bd40-49d3-8cc6-fd774decc303 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.433 189300 DEBUG oslo_concurrency.lockutils [req-eb7d2401-b679-4409-b349-a0a90d8c1514 req-8974c654-386e-4eb5-87ee-548863690e7e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "6b358f92-75c9-4c1b-8a5c-733f8ded1782-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.433 189300 DEBUG oslo_concurrency.lockutils [req-eb7d2401-b679-4409-b349-a0a90d8c1514 req-8974c654-386e-4eb5-87ee-548863690e7e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "6b358f92-75c9-4c1b-8a5c-733f8ded1782-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.434 189300 DEBUG oslo_concurrency.lockutils [req-eb7d2401-b679-4409-b349-a0a90d8c1514 req-8974c654-386e-4eb5-87ee-548863690e7e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "6b358f92-75c9-4c1b-8a5c-733f8ded1782-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.434 189300 DEBUG nova.compute.manager [req-eb7d2401-b679-4409-b349-a0a90d8c1514 req-8974c654-386e-4eb5-87ee-548863690e7e 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Processing event network-vif-plugged-cc026db1-bd40-49d3-8cc6-fd774decc303 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.435 189300 DEBUG nova.compute.manager [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 28 18:21:58 compute-0 podman[251485]: 2025-11-28 18:21:58.341928105 +0000 UTC m=+0.033866016 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.440 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764354118.4400063, 6b358f92-75c9-4c1b-8a5c-733f8ded1782 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.441 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] VM Resumed (Lifecycle Event)#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.444 189300 DEBUG nova.virt.libvirt.driver [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.449 189300 INFO nova.virt.libvirt.driver [-] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Instance spawned successfully.#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.450 189300 DEBUG nova.virt.libvirt.driver [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 28 18:21:58 compute-0 systemd[1]: Started libcrun container.
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.465 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:21:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60ed904e887392780f5578d090095ae5b4b882d1845c74e9eb7198835b95b48e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.474 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.481 189300 DEBUG nova.virt.libvirt.driver [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.482 189300 DEBUG nova.virt.libvirt.driver [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.483 189300 DEBUG nova.virt.libvirt.driver [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.483 189300 DEBUG nova.virt.libvirt.driver [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:21:58 compute-0 podman[251485]: 2025-11-28 18:21:58.48445823 +0000 UTC m=+0.176396141 container init 952d003d55eaa62f50e008fde202edb9be27e15f24a5c9c759582b24123d176b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec1293c7-fc62-4fad-8363-d05beea77f1d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.485 189300 DEBUG nova.virt.libvirt.driver [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.486 189300 DEBUG nova.virt.libvirt.driver [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.494 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 28 18:21:58 compute-0 podman[251485]: 2025-11-28 18:21:58.495987664 +0000 UTC m=+0.187925545 container start 952d003d55eaa62f50e008fde202edb9be27e15f24a5c9c759582b24123d176b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec1293c7-fc62-4fad-8363-d05beea77f1d, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 28 18:21:58 compute-0 neutron-haproxy-ovnmeta-ec1293c7-fc62-4fad-8363-d05beea77f1d[251500]: [NOTICE]   (251504) : New worker (251506) forked
Nov 28 18:21:58 compute-0 neutron-haproxy-ovnmeta-ec1293c7-fc62-4fad-8363-d05beea77f1d[251500]: [NOTICE]   (251504) : Loading success.
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.540 189300 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764354103.5390167, 38dd3ba8-0751-41a0-b83f-b49dc0b192c6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.542 189300 INFO nova.compute.manager [-] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] VM Stopped (Lifecycle Event)#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.550 189300 INFO nova.compute.manager [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Took 6.22 seconds to spawn the instance on the hypervisor.#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.551 189300 DEBUG nova.compute.manager [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.563 189300 DEBUG nova.compute.manager [None req-024ab851-bb7a-4541-a480-4f1c776b4d26 - - - - - -] [instance: 38dd3ba8-0751-41a0-b83f-b49dc0b192c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.613 189300 DEBUG nova.network.neutron [req-99010b2f-92c9-4b6d-92fd-b6b47cb76102 req-a3f52607-2c34-4843-9702-1bd895e29dcb 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Updated VIF entry in instance network info cache for port cc026db1-bd40-49d3-8cc6-fd774decc303. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.614 189300 DEBUG nova.network.neutron [req-99010b2f-92c9-4b6d-92fd-b6b47cb76102 req-a3f52607-2c34-4843-9702-1bd895e29dcb 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Updating instance_info_cache with network_info: [{"id": "cc026db1-bd40-49d3-8cc6-fd774decc303", "address": "fa:16:3e:ca:73:7d", "network": {"id": "ec1293c7-fc62-4fad-8363-d05beea77f1d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-9270562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1d450a53bb64bd7b153b2c9c627f3c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc026db1-bd", "ovs_interfaceid": "cc026db1-bd40-49d3-8cc6-fd774decc303", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.618 189300 INFO nova.compute.manager [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Took 6.68 seconds to build instance.#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.638 189300 DEBUG oslo_concurrency.lockutils [req-99010b2f-92c9-4b6d-92fd-b6b47cb76102 req-a3f52607-2c34-4843-9702-1bd895e29dcb 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-6b358f92-75c9-4c1b-8a5c-733f8ded1782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.652 189300 DEBUG oslo_concurrency.lockutils [None req-cffe250e-578b-4802-b505-82d0589ecf0e 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Lock "6b358f92-75c9-4c1b-8a5c-733f8ded1782" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:21:58 compute-0 nova_compute[189296]: 2025-11-28 18:21:58.700 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:21:59 compute-0 podman[203494]: time="2025-11-28T18:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:21:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:21:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4777 "" "Go-http-client/1.1"
Nov 28 18:22:00 compute-0 nova_compute[189296]: 2025-11-28 18:22:00.537 189300 DEBUG nova.compute.manager [req-2bf01a85-a91d-4d7d-9bfa-d5212f5897f5 req-fdb50c71-c640-4676-a3e5-9b489574be2a 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Received event network-vif-plugged-cc026db1-bd40-49d3-8cc6-fd774decc303 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:22:00 compute-0 nova_compute[189296]: 2025-11-28 18:22:00.539 189300 DEBUG oslo_concurrency.lockutils [req-2bf01a85-a91d-4d7d-9bfa-d5212f5897f5 req-fdb50c71-c640-4676-a3e5-9b489574be2a 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "6b358f92-75c9-4c1b-8a5c-733f8ded1782-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:22:00 compute-0 nova_compute[189296]: 2025-11-28 18:22:00.540 189300 DEBUG oslo_concurrency.lockutils [req-2bf01a85-a91d-4d7d-9bfa-d5212f5897f5 req-fdb50c71-c640-4676-a3e5-9b489574be2a 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "6b358f92-75c9-4c1b-8a5c-733f8ded1782-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:22:00 compute-0 nova_compute[189296]: 2025-11-28 18:22:00.540 189300 DEBUG oslo_concurrency.lockutils [req-2bf01a85-a91d-4d7d-9bfa-d5212f5897f5 req-fdb50c71-c640-4676-a3e5-9b489574be2a 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "6b358f92-75c9-4c1b-8a5c-733f8ded1782-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:22:00 compute-0 nova_compute[189296]: 2025-11-28 18:22:00.541 189300 DEBUG nova.compute.manager [req-2bf01a85-a91d-4d7d-9bfa-d5212f5897f5 req-fdb50c71-c640-4676-a3e5-9b489574be2a 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] No waiting events found dispatching network-vif-plugged-cc026db1-bd40-49d3-8cc6-fd774decc303 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:22:00 compute-0 nova_compute[189296]: 2025-11-28 18:22:00.541 189300 WARNING nova.compute.manager [req-2bf01a85-a91d-4d7d-9bfa-d5212f5897f5 req-fdb50c71-c640-4676-a3e5-9b489574be2a 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Received unexpected event network-vif-plugged-cc026db1-bd40-49d3-8cc6-fd774decc303 for instance with vm_state active and task_state None.#033[00m
Nov 28 18:22:00 compute-0 nova_compute[189296]: 2025-11-28 18:22:00.767 189300 DEBUG oslo_concurrency.lockutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Acquiring lock "200bd8bc-d121-4a86-b728-ea98aac95adf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:22:00 compute-0 nova_compute[189296]: 2025-11-28 18:22:00.768 189300 DEBUG oslo_concurrency.lockutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "200bd8bc-d121-4a86-b728-ea98aac95adf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:22:00 compute-0 nova_compute[189296]: 2025-11-28 18:22:00.788 189300 DEBUG nova.compute.manager [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 28 18:22:00 compute-0 nova_compute[189296]: 2025-11-28 18:22:00.868 189300 DEBUG oslo_concurrency.lockutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:22:00 compute-0 nova_compute[189296]: 2025-11-28 18:22:00.869 189300 DEBUG oslo_concurrency.lockutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:22:00 compute-0 nova_compute[189296]: 2025-11-28 18:22:00.876 189300 DEBUG nova.virt.hardware [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 28 18:22:00 compute-0 nova_compute[189296]: 2025-11-28 18:22:00.876 189300 INFO nova.compute.claims [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Claim successful on node compute-0.ctlplane.example.com
Nov 28 18:22:01 compute-0 nova_compute[189296]: 2025-11-28 18:22:01.003 189300 DEBUG nova.compute.provider_tree [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 28 18:22:01 compute-0 nova_compute[189296]: 2025-11-28 18:22:01.017 189300 DEBUG nova.scheduler.client.report [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 28 18:22:01 compute-0 nova_compute[189296]: 2025-11-28 18:22:01.048 189300 DEBUG oslo_concurrency.lockutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:22:01 compute-0 nova_compute[189296]: 2025-11-28 18:22:01.049 189300 DEBUG nova.compute.manager [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 28 18:22:01 compute-0 nova_compute[189296]: 2025-11-28 18:22:01.100 189300 DEBUG nova.compute.manager [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 28 18:22:01 compute-0 nova_compute[189296]: 2025-11-28 18:22:01.101 189300 DEBUG nova.network.neutron [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 28 18:22:01 compute-0 nova_compute[189296]: 2025-11-28 18:22:01.117 189300 INFO nova.virt.libvirt.driver [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 28 18:22:01 compute-0 nova_compute[189296]: 2025-11-28 18:22:01.133 189300 DEBUG nova.compute.manager [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 28 18:22:01 compute-0 nova_compute[189296]: 2025-11-28 18:22:01.140 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:22:01 compute-0 NetworkManager[56307]: <info>  [1764354121.1418] manager: (patch-provnet-564e20d3-e524-48c8-993a-ae41282beadd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Nov 28 18:22:01 compute-0 NetworkManager[56307]: <info>  [1764354121.1449] manager: (patch-br-int-to-provnet-564e20d3-e524-48c8-993a-ae41282beadd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Nov 28 18:22:01 compute-0 nova_compute[189296]: 2025-11-28 18:22:01.229 189300 DEBUG nova.compute.manager [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 28 18:22:01 compute-0 nova_compute[189296]: 2025-11-28 18:22:01.231 189300 DEBUG nova.virt.libvirt.driver [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 28 18:22:01 compute-0 nova_compute[189296]: 2025-11-28 18:22:01.231 189300 INFO nova.virt.libvirt.driver [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Creating image(s)
Nov 28 18:22:01 compute-0 nova_compute[189296]: 2025-11-28 18:22:01.232 189300 DEBUG oslo_concurrency.lockutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Acquiring lock "/var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:22:01 compute-0 nova_compute[189296]: 2025-11-28 18:22:01.233 189300 DEBUG oslo_concurrency.lockutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "/var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:22:01 compute-0 nova_compute[189296]: 2025-11-28 18:22:01.234 189300 DEBUG oslo_concurrency.lockutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "/var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:22:01 compute-0 nova_compute[189296]: 2025-11-28 18:22:01.234 189300 DEBUG oslo_concurrency.lockutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Acquiring lock "ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:22:01 compute-0 nova_compute[189296]: 2025-11-28 18:22:01.235 189300 DEBUG oslo_concurrency.lockutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:22:01 compute-0 nova_compute[189296]: 2025-11-28 18:22:01.308 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:22:01 compute-0 ovn_controller[97771]: 2025-11-28T18:22:01Z|00167|binding|INFO|Releasing lport ba6f3dd6-5c75-47b7-8e12-fc0d1c1b7899 from this chassis (sb_readonly=0)
Nov 28 18:22:01 compute-0 nova_compute[189296]: 2025-11-28 18:22:01.344 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:22:01 compute-0 openstack_network_exporter[205632]: ERROR   18:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:22:01 compute-0 openstack_network_exporter[205632]: ERROR   18:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:22:01 compute-0 openstack_network_exporter[205632]: ERROR   18:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:22:01 compute-0 openstack_network_exporter[205632]: ERROR   18:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:22:01 compute-0 openstack_network_exporter[205632]: ERROR   18:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:22:01 compute-0 nova_compute[189296]: 2025-11-28 18:22:01.521 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:22:01 compute-0 nova_compute[189296]: 2025-11-28 18:22:01.533 189300 DEBUG nova.policy [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c1f6c07dc6c5400cbf4fa724992b16d3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4c71a276f38f4bfebf1d3631d6f82966', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 28 18:22:02 compute-0 nova_compute[189296]: 2025-11-28 18:22:02.241 189300 DEBUG nova.network.neutron [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Successfully created port: 49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 28 18:22:02 compute-0 nova_compute[189296]: 2025-11-28 18:22:02.644 189300 DEBUG nova.compute.manager [req-d069a6bd-1052-43b7-bb86-e14c000895b8 req-8b417c16-292f-4850-bfb0-2a971a393ad1 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Received event network-changed-cc026db1-bd40-49d3-8cc6-fd774decc303 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 28 18:22:02 compute-0 nova_compute[189296]: 2025-11-28 18:22:02.644 189300 DEBUG nova.compute.manager [req-d069a6bd-1052-43b7-bb86-e14c000895b8 req-8b417c16-292f-4850-bfb0-2a971a393ad1 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Refreshing instance network info cache due to event network-changed-cc026db1-bd40-49d3-8cc6-fd774decc303. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 28 18:22:02 compute-0 nova_compute[189296]: 2025-11-28 18:22:02.645 189300 DEBUG oslo_concurrency.lockutils [req-d069a6bd-1052-43b7-bb86-e14c000895b8 req-8b417c16-292f-4850-bfb0-2a971a393ad1 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-6b358f92-75c9-4c1b-8a5c-733f8ded1782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 28 18:22:02 compute-0 nova_compute[189296]: 2025-11-28 18:22:02.645 189300 DEBUG oslo_concurrency.lockutils [req-d069a6bd-1052-43b7-bb86-e14c000895b8 req-8b417c16-292f-4850-bfb0-2a971a393ad1 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-6b358f92-75c9-4c1b-8a5c-733f8ded1782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 28 18:22:02 compute-0 nova_compute[189296]: 2025-11-28 18:22:02.646 189300 DEBUG nova.network.neutron [req-d069a6bd-1052-43b7-bb86-e14c000895b8 req-8b417c16-292f-4850-bfb0-2a971a393ad1 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Refreshing network info cache for port cc026db1-bd40-49d3-8cc6-fd774decc303 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 28 18:22:02 compute-0 nova_compute[189296]: 2025-11-28 18:22:02.698 189300 DEBUG oslo_concurrency.processutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:22:02 compute-0 nova_compute[189296]: 2025-11-28 18:22:02.760 189300 DEBUG oslo_concurrency.processutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa.part --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:22:02 compute-0 nova_compute[189296]: 2025-11-28 18:22:02.761 189300 DEBUG nova.virt.images [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] 7d5268e2-45b5-44b2-b3c1-3da9b27b258e was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 28 18:22:02 compute-0 nova_compute[189296]: 2025-11-28 18:22:02.762 189300 DEBUG nova.privsep.utils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 28 18:22:02 compute-0 nova_compute[189296]: 2025-11-28 18:22:02.763 189300 DEBUG oslo_concurrency.processutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa.part /var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.013 189300 DEBUG oslo_concurrency.processutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa.part /var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa.converted" returned: 0 in 0.250s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.019 189300 DEBUG oslo_concurrency.processutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.097 189300 DEBUG oslo_concurrency.processutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa.converted --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.099 189300 DEBUG oslo_concurrency.lockutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.112 189300 DEBUG oslo_concurrency.processutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.175 189300 DEBUG oslo_concurrency.processutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.176 189300 DEBUG oslo_concurrency.lockutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Acquiring lock "ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.177 189300 DEBUG oslo_concurrency.lockutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.188 189300 DEBUG oslo_concurrency.processutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.252 189300 DEBUG oslo_concurrency.processutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.254 189300 DEBUG oslo_concurrency.processutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa,backing_fmt=raw /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.290 189300 DEBUG oslo_concurrency.processutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa,backing_fmt=raw /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk 1073741824" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.298 189300 DEBUG oslo_concurrency.lockutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.299 189300 DEBUG oslo_concurrency.processutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.353 189300 DEBUG oslo_concurrency.processutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.354 189300 DEBUG nova.virt.disk.api [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Checking if we can resize image /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.355 189300 DEBUG oslo_concurrency.processutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.411 189300 DEBUG oslo_concurrency.processutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.413 189300 DEBUG nova.virt.disk.api [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Cannot resize image /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.413 189300 DEBUG nova.objects.instance [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lazy-loading 'migration_context' on Instance uuid 200bd8bc-d121-4a86-b728-ea98aac95adf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.426 189300 DEBUG nova.virt.libvirt.driver [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.426 189300 DEBUG nova.virt.libvirt.driver [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Ensure instance console log exists: /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.427 189300 DEBUG oslo_concurrency.lockutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.428 189300 DEBUG oslo_concurrency.lockutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.428 189300 DEBUG oslo_concurrency.lockutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.530 189300 DEBUG nova.network.neutron [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Successfully updated port: 49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.554 189300 DEBUG oslo_concurrency.lockutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Acquiring lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.567 189300 DEBUG oslo_concurrency.lockutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Acquired lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.568 189300 DEBUG nova.network.neutron [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.667 189300 DEBUG nova.compute.manager [req-8945f3eb-a9e1-4c4b-8ede-adc00efbc835 req-1760ac87-3ab1-4fb0-a049-f5dc159c53d4 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Received event network-changed-49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.668 189300 DEBUG nova.compute.manager [req-8945f3eb-a9e1-4c4b-8ede-adc00efbc835 req-1760ac87-3ab1-4fb0-a049-f5dc159c53d4 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Refreshing instance network info cache due to event network-changed-49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.669 189300 DEBUG oslo_concurrency.lockutils [req-8945f3eb-a9e1-4c4b-8ede-adc00efbc835 req-1760ac87-3ab1-4fb0-a049-f5dc159c53d4 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.703 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:22:03 compute-0 nova_compute[189296]: 2025-11-28 18:22:03.758 189300 DEBUG nova.network.neutron [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 28 18:22:04 compute-0 nova_compute[189296]: 2025-11-28 18:22:04.757 189300 DEBUG nova.network.neutron [req-d069a6bd-1052-43b7-bb86-e14c000895b8 req-8b417c16-292f-4850-bfb0-2a971a393ad1 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Updated VIF entry in instance network info cache for port cc026db1-bd40-49d3-8cc6-fd774decc303. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 28 18:22:04 compute-0 nova_compute[189296]: 2025-11-28 18:22:04.758 189300 DEBUG nova.network.neutron [req-d069a6bd-1052-43b7-bb86-e14c000895b8 req-8b417c16-292f-4850-bfb0-2a971a393ad1 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Updating instance_info_cache with network_info: [{"id": "cc026db1-bd40-49d3-8cc6-fd774decc303", "address": "fa:16:3e:ca:73:7d", "network": {"id": "ec1293c7-fc62-4fad-8363-d05beea77f1d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-9270562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1d450a53bb64bd7b153b2c9c627f3c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc026db1-bd", "ovs_interfaceid": "cc026db1-bd40-49d3-8cc6-fd774decc303", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 28 18:22:04 compute-0 nova_compute[189296]: 2025-11-28 18:22:04.782 189300 DEBUG oslo_concurrency.lockutils [req-d069a6bd-1052-43b7-bb86-e14c000895b8 req-8b417c16-292f-4850-bfb0-2a971a393ad1 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-6b358f92-75c9-4c1b-8a5c-733f8ded1782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 28 18:22:04 compute-0 nova_compute[189296]: 2025-11-28 18:22:04.947 189300 DEBUG nova.network.neutron [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Updating instance_info_cache with network_info: [{"id": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "address": "fa:16:3e:c6:fd:79", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49c3cd00-3b", "ovs_interfaceid": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 28 18:22:04 compute-0 nova_compute[189296]: 2025-11-28 18:22:04.970 189300 DEBUG oslo_concurrency.lockutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Releasing lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 28 18:22:04 compute-0 nova_compute[189296]: 2025-11-28 18:22:04.971 189300 DEBUG nova.compute.manager [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Instance network_info: |[{"id": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "address": "fa:16:3e:c6:fd:79", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49c3cd00-3b", "ovs_interfaceid": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 28 18:22:04 compute-0 nova_compute[189296]: 2025-11-28 18:22:04.972 189300 DEBUG oslo_concurrency.lockutils [req-8945f3eb-a9e1-4c4b-8ede-adc00efbc835 req-1760ac87-3ab1-4fb0-a049-f5dc159c53d4 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:22:04 compute-0 nova_compute[189296]: 2025-11-28 18:22:04.973 189300 DEBUG nova.network.neutron [req-8945f3eb-a9e1-4c4b-8ede-adc00efbc835 req-1760ac87-3ab1-4fb0-a049-f5dc159c53d4 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Refreshing network info cache for port 49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 18:22:04 compute-0 nova_compute[189296]: 2025-11-28 18:22:04.977 189300 DEBUG nova.virt.libvirt.driver [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Start _get_guest_xml network_info=[{"id": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "address": "fa:16:3e:c6:fd:79", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49c3cd00-3b", "ovs_interfaceid": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-28T18:21:53Z,direct_url=<?>,disk_format='qcow2',id=7d5268e2-45b5-44b2-b3c1-3da9b27b258e,min_disk=0,min_ram=0,name='tempest-scenario-img--853594115',owner='4c71a276f38f4bfebf1d3631d6f82966',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-28T18:21:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'image_id': '7d5268e2-45b5-44b2-b3c1-3da9b27b258e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 28 18:22:04 compute-0 nova_compute[189296]: 2025-11-28 18:22:04.986 189300 WARNING nova.virt.libvirt.driver [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:22:04 compute-0 nova_compute[189296]: 2025-11-28 18:22:04.992 189300 DEBUG nova.virt.libvirt.host [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 28 18:22:04 compute-0 nova_compute[189296]: 2025-11-28 18:22:04.994 189300 DEBUG nova.virt.libvirt.host [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.000 189300 DEBUG nova.virt.libvirt.host [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.001 189300 DEBUG nova.virt.libvirt.host [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.002 189300 DEBUG nova.virt.libvirt.driver [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.003 189300 DEBUG nova.virt.hardware [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-28T18:16:37Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b177f611-8f79-4bfd-9a12-e83e9545757b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-28T18:21:53Z,direct_url=<?>,disk_format='qcow2',id=7d5268e2-45b5-44b2-b3c1-3da9b27b258e,min_disk=0,min_ram=0,name='tempest-scenario-img--853594115',owner='4c71a276f38f4bfebf1d3631d6f82966',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-28T18:21:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.004 189300 DEBUG nova.virt.hardware [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.004 189300 DEBUG nova.virt.hardware [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.005 189300 DEBUG nova.virt.hardware [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.006 189300 DEBUG nova.virt.hardware [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.006 189300 DEBUG nova.virt.hardware [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.007 189300 DEBUG nova.virt.hardware [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.008 189300 DEBUG nova.virt.hardware [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.009 189300 DEBUG nova.virt.hardware [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.010 189300 DEBUG nova.virt.hardware [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.010 189300 DEBUG nova.virt.hardware [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.015 189300 DEBUG nova.virt.libvirt.vif [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T18:22:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw',id=15,image_ref='7d5268e2-45b5-44b2-b3c1-3da9b27b258e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='a12ef97f-9351-448f-95c7-ab90e2c7b098'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c71a276f38f4bfebf1d3631d6f82966',ramdisk_id='',reservation_id='r-88oymigz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7d5268e2-45b5-44b2-b3c1-3da9b27b258e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-320555444',owner_user_name='tempest-PrometheusGabbiTest-320555
444-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:22:01Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='c1f6c07dc6c5400cbf4fa724992b16d3',uuid=200bd8bc-d121-4a86-b728-ea98aac95adf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "address": "fa:16:3e:c6:fd:79", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49c3cd00-3b", "ovs_interfaceid": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.016 189300 DEBUG nova.network.os_vif_util [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Converting VIF {"id": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "address": "fa:16:3e:c6:fd:79", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49c3cd00-3b", "ovs_interfaceid": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.018 189300 DEBUG nova.network.os_vif_util [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:fd:79,bridge_name='br-int',has_traffic_filtering=True,id=49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7,network=Network(a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49c3cd00-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.019 189300 DEBUG nova.objects.instance [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lazy-loading 'pci_devices' on Instance uuid 200bd8bc-d121-4a86-b728-ea98aac95adf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.034 189300 DEBUG nova.virt.libvirt.driver [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] End _get_guest_xml xml=<domain type="kvm">
Nov 28 18:22:05 compute-0 nova_compute[189296]:  <uuid>200bd8bc-d121-4a86-b728-ea98aac95adf</uuid>
Nov 28 18:22:05 compute-0 nova_compute[189296]:  <name>instance-0000000f</name>
Nov 28 18:22:05 compute-0 nova_compute[189296]:  <memory>131072</memory>
Nov 28 18:22:05 compute-0 nova_compute[189296]:  <vcpu>1</vcpu>
Nov 28 18:22:05 compute-0 nova_compute[189296]:  <metadata>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <nova:name>te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw</nova:name>
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <nova:creationTime>2025-11-28 18:22:04</nova:creationTime>
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <nova:flavor name="m1.nano">
Nov 28 18:22:05 compute-0 nova_compute[189296]:        <nova:memory>128</nova:memory>
Nov 28 18:22:05 compute-0 nova_compute[189296]:        <nova:disk>1</nova:disk>
Nov 28 18:22:05 compute-0 nova_compute[189296]:        <nova:swap>0</nova:swap>
Nov 28 18:22:05 compute-0 nova_compute[189296]:        <nova:ephemeral>0</nova:ephemeral>
Nov 28 18:22:05 compute-0 nova_compute[189296]:        <nova:vcpus>1</nova:vcpus>
Nov 28 18:22:05 compute-0 nova_compute[189296]:      </nova:flavor>
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <nova:owner>
Nov 28 18:22:05 compute-0 nova_compute[189296]:        <nova:user uuid="c1f6c07dc6c5400cbf4fa724992b16d3">tempest-PrometheusGabbiTest-320555444-project-member</nova:user>
Nov 28 18:22:05 compute-0 nova_compute[189296]:        <nova:project uuid="4c71a276f38f4bfebf1d3631d6f82966">tempest-PrometheusGabbiTest-320555444</nova:project>
Nov 28 18:22:05 compute-0 nova_compute[189296]:      </nova:owner>
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <nova:root type="image" uuid="7d5268e2-45b5-44b2-b3c1-3da9b27b258e"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <nova:ports>
Nov 28 18:22:05 compute-0 nova_compute[189296]:        <nova:port uuid="49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7">
Nov 28 18:22:05 compute-0 nova_compute[189296]:          <nova:ip type="fixed" address="10.100.2.67" ipVersion="4"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:        </nova:port>
Nov 28 18:22:05 compute-0 nova_compute[189296]:      </nova:ports>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    </nova:instance>
Nov 28 18:22:05 compute-0 nova_compute[189296]:  </metadata>
Nov 28 18:22:05 compute-0 nova_compute[189296]:  <sysinfo type="smbios">
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <system>
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <entry name="manufacturer">RDO</entry>
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <entry name="product">OpenStack Compute</entry>
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <entry name="serial">200bd8bc-d121-4a86-b728-ea98aac95adf</entry>
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <entry name="uuid">200bd8bc-d121-4a86-b728-ea98aac95adf</entry>
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <entry name="family">Virtual Machine</entry>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    </system>
Nov 28 18:22:05 compute-0 nova_compute[189296]:  </sysinfo>
Nov 28 18:22:05 compute-0 nova_compute[189296]:  <os>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <boot dev="hd"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <smbios mode="sysinfo"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:  </os>
Nov 28 18:22:05 compute-0 nova_compute[189296]:  <features>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <acpi/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <apic/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <vmcoreinfo/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:  </features>
Nov 28 18:22:05 compute-0 nova_compute[189296]:  <clock offset="utc">
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <timer name="pit" tickpolicy="delay"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <timer name="hpet" present="no"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:  </clock>
Nov 28 18:22:05 compute-0 nova_compute[189296]:  <cpu mode="host-model" match="exact">
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <topology sockets="1" cores="1" threads="1"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:  </cpu>
Nov 28 18:22:05 compute-0 nova_compute[189296]:  <devices>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <disk type="file" device="disk">
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <target dev="vda" bus="virtio"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <disk type="file" device="cdrom">
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <driver name="qemu" type="raw" cache="none"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk.config"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <target dev="sda" bus="sata"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <interface type="ethernet">
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <mac address="fa:16:3e:c6:fd:79"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <driver name="vhost" rx_queue_size="512"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <mtu size="1442"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <target dev="tap49c3cd00-3b"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    </interface>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <serial type="pty">
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <log file="/var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/console.log" append="off"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    </serial>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <video>
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    </video>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <input type="tablet" bus="usb"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <rng model="virtio">
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <backend model="random">/dev/urandom</backend>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    </rng>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <controller type="usb" index="0"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    <memballoon model="virtio">
Nov 28 18:22:05 compute-0 nova_compute[189296]:      <stats period="10"/>
Nov 28 18:22:05 compute-0 nova_compute[189296]:    </memballoon>
Nov 28 18:22:05 compute-0 nova_compute[189296]:  </devices>
Nov 28 18:22:05 compute-0 nova_compute[189296]: </domain>
Nov 28 18:22:05 compute-0 nova_compute[189296]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.046 189300 DEBUG nova.compute.manager [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Preparing to wait for external event network-vif-plugged-49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.047 189300 DEBUG oslo_concurrency.lockutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Acquiring lock "200bd8bc-d121-4a86-b728-ea98aac95adf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.047 189300 DEBUG oslo_concurrency.lockutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "200bd8bc-d121-4a86-b728-ea98aac95adf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.047 189300 DEBUG oslo_concurrency.lockutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "200bd8bc-d121-4a86-b728-ea98aac95adf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.048 189300 DEBUG nova.virt.libvirt.vif [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T18:22:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw',id=15,image_ref='7d5268e2-45b5-44b2-b3c1-3da9b27b258e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='a12ef97f-9351-448f-95c7-ab90e2c7b098'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c71a276f38f4bfebf1d3631d6f82966',ramdisk_id='',reservation_id='r-88oymigz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7d5268e2-45b5-44b2-b3c1-3da9b27b258e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-320555444',owner_user_name='tempest-PrometheusGabbiT
est-320555444-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:22:01Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='c1f6c07dc6c5400cbf4fa724992b16d3',uuid=200bd8bc-d121-4a86-b728-ea98aac95adf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "address": "fa:16:3e:c6:fd:79", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49c3cd00-3b", "ovs_interfaceid": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.048 189300 DEBUG nova.network.os_vif_util [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Converting VIF {"id": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "address": "fa:16:3e:c6:fd:79", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49c3cd00-3b", "ovs_interfaceid": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.049 189300 DEBUG nova.network.os_vif_util [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:fd:79,bridge_name='br-int',has_traffic_filtering=True,id=49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7,network=Network(a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49c3cd00-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.049 189300 DEBUG os_vif [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:fd:79,bridge_name='br-int',has_traffic_filtering=True,id=49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7,network=Network(a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49c3cd00-3b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.050 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.050 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.051 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.054 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.054 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49c3cd00-3b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.055 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap49c3cd00-3b, col_values=(('external_ids', {'iface-id': '49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c6:fd:79', 'vm-uuid': '200bd8bc-d121-4a86-b728-ea98aac95adf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.056 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:22:05 compute-0 NetworkManager[56307]: <info>  [1764354125.0588] manager: (tap49c3cd00-3b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.059 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.065 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.066 189300 INFO os_vif [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:fd:79,bridge_name='br-int',has_traffic_filtering=True,id=49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7,network=Network(a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49c3cd00-3b')#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.130 189300 DEBUG nova.virt.libvirt.driver [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.130 189300 DEBUG nova.virt.libvirt.driver [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.130 189300 DEBUG nova.virt.libvirt.driver [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] No VIF found with MAC fa:16:3e:c6:fd:79, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.131 189300 INFO nova.virt.libvirt.driver [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Using config drive#033[00m
Nov 28 18:22:05 compute-0 podman[251546]: 2025-11-28 18:22:05.160556455 +0000 UTC m=+0.062149603 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.485 189300 INFO nova.virt.libvirt.driver [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Creating config drive at /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk.config#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.490 189300 DEBUG oslo_concurrency.processutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvj4t71nj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.615 189300 DEBUG oslo_concurrency.processutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvj4t71nj" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:22:05 compute-0 kernel: tap49c3cd00-3b: entered promiscuous mode
Nov 28 18:22:05 compute-0 NetworkManager[56307]: <info>  [1764354125.6768] manager: (tap49c3cd00-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.681 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:22:05 compute-0 ovn_controller[97771]: 2025-11-28T18:22:05Z|00168|binding|INFO|Claiming lport 49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7 for this chassis.
Nov 28 18:22:05 compute-0 ovn_controller[97771]: 2025-11-28T18:22:05Z|00169|binding|INFO|49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7: Claiming fa:16:3e:c6:fd:79 10.100.2.67
Nov 28 18:22:05 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:05.689 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:fd:79 10.100.2.67'], port_security=['fa:16:3e:c6:fd:79 10.100.2.67'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.2.67/16', 'neutron:device_id': '200bd8bc-d121-4a86-b728-ea98aac95adf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c71a276f38f4bfebf1d3631d6f82966', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b7e19568-d693-4981-82d8-a6cf61584030', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21fa20d8-e3c8-4e6c-a5e8-bb4e198483f9, chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:22:05 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:05.691 106624 INFO neutron.agent.ovn.metadata.agent [-] Port 49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7 in datapath a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5 bound to our chassis#033[00m
Nov 28 18:22:05 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:05.710 106624 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5#033[00m
Nov 28 18:22:05 compute-0 systemd-udevd[251585]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 18:22:05 compute-0 NetworkManager[56307]: <info>  [1764354125.7412] device (tap49c3cd00-3b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 28 18:22:05 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:05.744 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[9c72c951-099b-406c-b74f-1be6b06f47b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:22:05 compute-0 NetworkManager[56307]: <info>  [1764354125.7475] device (tap49c3cd00-3b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 28 18:22:05 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:05.745 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa60c0580-51 in ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 28 18:22:05 compute-0 systemd-machined[155703]: New machine qemu-16-instance-0000000f.
Nov 28 18:22:05 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:05.747 238909 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa60c0580-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 28 18:22:05 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:05.748 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[b3de150f-28f6-4f0e-a30e-323ed72dea7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:22:05 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:05.749 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d5a68c-165d-4788-8af5-019621048fa9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.758 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:22:05 compute-0 ovn_controller[97771]: 2025-11-28T18:22:05Z|00170|binding|INFO|Setting lport 49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7 ovn-installed in OVS
Nov 28 18:22:05 compute-0 ovn_controller[97771]: 2025-11-28T18:22:05Z|00171|binding|INFO|Setting lport 49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7 up in Southbound
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.761 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:22:05 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:05.761 106734 DEBUG oslo.privsep.daemon [-] privsep: reply[fd53a350-86f6-4f08-9259-3535ac0747d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:22:05 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000f.
Nov 28 18:22:05 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:05.793 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[80ed62e7-b4d4-4cbe-8dad-0e089343b3a4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:22:05 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:05.819 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[2db62a51-5e07-4bf0-8ce5-2d03f5ff6490]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:22:05 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:05.826 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[f350f3db-5ca2-4bd5-a2cc-2bd01783edb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:22:05 compute-0 NetworkManager[56307]: <info>  [1764354125.8292] manager: (tapa60c0580-50): new Veth device (/org/freedesktop/NetworkManager/Devices/77)
Nov 28 18:22:05 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:05.869 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[348d08c5-ff1f-4053-a7bd-6a2f6af44794]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:22:05 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:05.873 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[1518f8f1-d8b2-4e2d-a3cf-fd28b7874476]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:22:05 compute-0 NetworkManager[56307]: <info>  [1764354125.8975] device (tapa60c0580-50): carrier: link connected
Nov 28 18:22:05 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:05.904 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[063a7a4f-47c9-42bd-8caf-60231b14b054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:22:05 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:05.923 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[03ed3fc6-460f-4d1a-97ce-195c637867ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa60c0580-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d1:11:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527149, 'reachable_time': 40458, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251621, 'error': None, 'target': 'ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:22:05 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:05.939 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[9d45267e-75c1-431e-a7f8-08453f496d43]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed1:1176'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527149, 'tstamp': 527149}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251622, 'error': None, 'target': 'ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:22:05 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:05.958 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[6a104d01-aba9-4fa2-a527-e28e01d20d9b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa60c0580-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d1:11:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527149, 'reachable_time': 40458, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251624, 'error': None, 'target': 'ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.974 189300 DEBUG nova.compute.manager [req-acebee05-9a46-4ddb-bb39-e56104746e26 req-d84e250f-addc-4153-9442-d39d3e800980 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Received event network-vif-plugged-49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.974 189300 DEBUG oslo_concurrency.lockutils [req-acebee05-9a46-4ddb-bb39-e56104746e26 req-d84e250f-addc-4153-9442-d39d3e800980 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "200bd8bc-d121-4a86-b728-ea98aac95adf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.975 189300 DEBUG oslo_concurrency.lockutils [req-acebee05-9a46-4ddb-bb39-e56104746e26 req-d84e250f-addc-4153-9442-d39d3e800980 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "200bd8bc-d121-4a86-b728-ea98aac95adf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.975 189300 DEBUG oslo_concurrency.lockutils [req-acebee05-9a46-4ddb-bb39-e56104746e26 req-d84e250f-addc-4153-9442-d39d3e800980 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "200bd8bc-d121-4a86-b728-ea98aac95adf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:22:05 compute-0 nova_compute[189296]: 2025-11-28 18:22:05.975 189300 DEBUG nova.compute.manager [req-acebee05-9a46-4ddb-bb39-e56104746e26 req-d84e250f-addc-4153-9442-d39d3e800980 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Processing event network-vif-plugged-49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:06.002 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[6fe0e7e3-6b47-42a3-bffe-25cf840dbbf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.057 189300 DEBUG nova.compute.manager [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.060 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764354126.0596836, 200bd8bc-d121-4a86-b728-ea98aac95adf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.060 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] VM Started (Lifecycle Event)#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.063 189300 DEBUG nova.virt.libvirt.driver [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.068 189300 INFO nova.virt.libvirt.driver [-] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Instance spawned successfully.#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.069 189300 DEBUG nova.virt.libvirt.driver [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:06.083 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[379e3615-2cbf-45ac-9a3d-d9dbb498757a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:06.084 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa60c0580-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:06.085 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:06.085 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa60c0580-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:22:06 compute-0 NetworkManager[56307]: <info>  [1764354126.0886] manager: (tapa60c0580-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Nov 28 18:22:06 compute-0 kernel: tapa60c0580-50: entered promiscuous mode
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.093 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.094 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:06.102 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa60c0580-50, col_values=(('external_ids', {'iface-id': '29b269a8-673c-48a9-bc1f-c180355b2c1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:22:06 compute-0 ovn_controller[97771]: 2025-11-28T18:22:06Z|00172|binding|INFO|Releasing lport 29b269a8-673c-48a9-bc1f-c180355b2c1b from this chassis (sb_readonly=0)
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.104 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.106 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.111 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:06.114 106624 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.115 189300 DEBUG nova.virt.libvirt.driver [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.116 189300 DEBUG nova.virt.libvirt.driver [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.116 189300 DEBUG nova.virt.libvirt.driver [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:06.116 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[db6e6770-2883-423c-89de-acdb8682183b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:06.117 106624 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]: global
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]:    log         /dev/log local0 debug
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]:    log-tag     haproxy-metadata-proxy-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]:    user        root
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]:    group       root
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]:    maxconn     1024
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]:    pidfile     /var/lib/neutron/external/pids/a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5.pid.haproxy
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]:    daemon
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]: defaults
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]:    log global
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]:    mode http
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]:    option httplog
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]:    option dontlognull
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]:    option http-server-close
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]:    option forwardfor
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]:    retries                 3
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]:    timeout http-request    30s
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]:    timeout connect         30s
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]:    timeout client          32s
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]:    timeout server          32s
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]:    timeout http-keep-alive 30s
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]: 
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]: listen listener
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]:    bind 169.254.169.254:80
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]:    server metadata /var/lib/neutron/metadata_proxy
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]:    http-request add-header X-OVN-Network-ID a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 28 18:22:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:06.117 106624 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5', 'env', 'PROCESS_TAG=haproxy-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.120 189300 DEBUG nova.virt.libvirt.driver [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.120 189300 DEBUG nova.virt.libvirt.driver [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.121 189300 DEBUG nova.virt.libvirt.driver [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.124 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.128 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.129 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764354126.0597882, 200bd8bc-d121-4a86-b728-ea98aac95adf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.129 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] VM Paused (Lifecycle Event)#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.157 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.162 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764354126.0649884, 200bd8bc-d121-4a86-b728-ea98aac95adf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.163 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] VM Resumed (Lifecycle Event)#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.189 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.194 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.198 189300 INFO nova.compute.manager [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Took 4.97 seconds to spawn the instance on the hypervisor.#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.198 189300 DEBUG nova.compute.manager [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.206 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.256 189300 INFO nova.compute.manager [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Took 5.41 seconds to build instance.#033[00m
Nov 28 18:22:06 compute-0 nova_compute[189296]: 2025-11-28 18:22:06.318 189300 DEBUG oslo_concurrency.lockutils [None req-093ed123-18f6-4052-a1a7-d19efe109b65 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "200bd8bc-d121-4a86-b728-ea98aac95adf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:22:06 compute-0 podman[251661]: 2025-11-28 18:22:06.60427397 +0000 UTC m=+0.099489665 container create e82dde58cd74a6b246d5d80527195a2f0196be3cf7b63d7dfc71db4a45b8e7b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 28 18:22:06 compute-0 podman[251661]: 2025-11-28 18:22:06.557168828 +0000 UTC m=+0.052384523 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 28 18:22:06 compute-0 systemd[1]: Started libpod-conmon-e82dde58cd74a6b246d5d80527195a2f0196be3cf7b63d7dfc71db4a45b8e7b1.scope.
Nov 28 18:22:06 compute-0 systemd[1]: Started libcrun container.
Nov 28 18:22:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279c700c0206dec5b3a1826f0e1709abd6d463665464c7069a38e39e3b74981d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 28 18:22:06 compute-0 podman[251661]: 2025-11-28 18:22:06.715432912 +0000 UTC m=+0.210648607 container init e82dde58cd74a6b246d5d80527195a2f0196be3cf7b63d7dfc71db4a45b8e7b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 28 18:22:06 compute-0 podman[251661]: 2025-11-28 18:22:06.728066213 +0000 UTC m=+0.223281888 container start e82dde58cd74a6b246d5d80527195a2f0196be3cf7b63d7dfc71db4a45b8e7b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 28 18:22:06 compute-0 neutron-haproxy-ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5[251677]: [NOTICE]   (251681) : New worker (251683) forked
Nov 28 18:22:06 compute-0 neutron-haproxy-ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5[251677]: [NOTICE]   (251681) : Loading success.
Nov 28 18:22:07 compute-0 nova_compute[189296]: 2025-11-28 18:22:07.264 189300 DEBUG nova.network.neutron [req-8945f3eb-a9e1-4c4b-8ede-adc00efbc835 req-1760ac87-3ab1-4fb0-a049-f5dc159c53d4 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Updated VIF entry in instance network info cache for port 49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 28 18:22:07 compute-0 nova_compute[189296]: 2025-11-28 18:22:07.266 189300 DEBUG nova.network.neutron [req-8945f3eb-a9e1-4c4b-8ede-adc00efbc835 req-1760ac87-3ab1-4fb0-a049-f5dc159c53d4 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Updating instance_info_cache with network_info: [{"id": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "address": "fa:16:3e:c6:fd:79", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49c3cd00-3b", "ovs_interfaceid": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:22:07 compute-0 nova_compute[189296]: 2025-11-28 18:22:07.282 189300 DEBUG oslo_concurrency.lockutils [req-8945f3eb-a9e1-4c4b-8ede-adc00efbc835 req-1760ac87-3ab1-4fb0-a049-f5dc159c53d4 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:22:08 compute-0 nova_compute[189296]: 2025-11-28 18:22:08.073 189300 DEBUG nova.compute.manager [req-73298cc4-8518-4021-84d5-1cc3ff159fb4 req-84caf721-492b-4cf4-b977-e32a636f279c 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Received event network-vif-plugged-49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:22:08 compute-0 nova_compute[189296]: 2025-11-28 18:22:08.077 189300 DEBUG oslo_concurrency.lockutils [req-73298cc4-8518-4021-84d5-1cc3ff159fb4 req-84caf721-492b-4cf4-b977-e32a636f279c 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "200bd8bc-d121-4a86-b728-ea98aac95adf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:22:08 compute-0 nova_compute[189296]: 2025-11-28 18:22:08.078 189300 DEBUG oslo_concurrency.lockutils [req-73298cc4-8518-4021-84d5-1cc3ff159fb4 req-84caf721-492b-4cf4-b977-e32a636f279c 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "200bd8bc-d121-4a86-b728-ea98aac95adf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:22:08 compute-0 nova_compute[189296]: 2025-11-28 18:22:08.079 189300 DEBUG oslo_concurrency.lockutils [req-73298cc4-8518-4021-84d5-1cc3ff159fb4 req-84caf721-492b-4cf4-b977-e32a636f279c 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "200bd8bc-d121-4a86-b728-ea98aac95adf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:22:08 compute-0 nova_compute[189296]: 2025-11-28 18:22:08.080 189300 DEBUG nova.compute.manager [req-73298cc4-8518-4021-84d5-1cc3ff159fb4 req-84caf721-492b-4cf4-b977-e32a636f279c 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] No waiting events found dispatching network-vif-plugged-49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:22:08 compute-0 nova_compute[189296]: 2025-11-28 18:22:08.081 189300 WARNING nova.compute.manager [req-73298cc4-8518-4021-84d5-1cc3ff159fb4 req-84caf721-492b-4cf4-b977-e32a636f279c 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Received unexpected event network-vif-plugged-49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7 for instance with vm_state active and task_state None.#033[00m
Nov 28 18:22:08 compute-0 nova_compute[189296]: 2025-11-28 18:22:08.705 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:22:10 compute-0 nova_compute[189296]: 2025-11-28 18:22:10.057 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:22:13 compute-0 nova_compute[189296]: 2025-11-28 18:22:13.708 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:22:15 compute-0 nova_compute[189296]: 2025-11-28 18:22:15.061 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:22:16 compute-0 nova_compute[189296]: 2025-11-28 18:22:16.621 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:22:18 compute-0 podman[251692]: 2025-11-28 18:22:18.020427783 +0000 UTC m=+0.082849754 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, vcs-type=git, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41)
Nov 28 18:22:18 compute-0 podman[251693]: 2025-11-28 18:22:18.023249533 +0000 UTC m=+0.085442508 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=f26160204c78771e78cdd2489258319b, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Nov 28 18:22:18 compute-0 podman[251694]: 2025-11-28 18:22:18.030221575 +0000 UTC m=+0.087534421 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 28 18:22:18 compute-0 nova_compute[189296]: 2025-11-28 18:22:18.711 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:22:19 compute-0 nova_compute[189296]: 2025-11-28 18:22:19.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:22:19 compute-0 nova_compute[189296]: 2025-11-28 18:22:19.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:22:20 compute-0 nova_compute[189296]: 2025-11-28 18:22:20.065 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:22:20 compute-0 nova_compute[189296]: 2025-11-28 18:22:20.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:22:20 compute-0 nova_compute[189296]: 2025-11-28 18:22:20.627 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:22:20 compute-0 nova_compute[189296]: 2025-11-28 18:22:20.664 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 28 18:22:20 compute-0 nova_compute[189296]: 2025-11-28 18:22:20.667 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:22:20 compute-0 nova_compute[189296]: 2025-11-28 18:22:20.668 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:22:23 compute-0 nova_compute[189296]: 2025-11-28 18:22:23.713 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:22:25 compute-0 podman[251748]: 2025-11-28 18:22:25.047464783 +0000 UTC m=+0.105138214 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 28 18:22:25 compute-0 podman[251747]: 2025-11-28 18:22:25.05177299 +0000 UTC m=+0.111871430 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 18:22:25 compute-0 podman[251749]: 2025-11-28 18:22:25.065545019 +0000 UTC m=+0.117792046 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.buildah.version=1.29.0, config_id=edpm, distribution-scope=public, name=ubi9, io.openshift.expose-services=, vendor=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, release=1214.1726694543)
Nov 28 18:22:25 compute-0 nova_compute[189296]: 2025-11-28 18:22:25.067 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:22:25 compute-0 podman[251750]: 2025-11-28 18:22:25.084489276 +0000 UTC m=+0.119100468 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125)
Nov 28 18:22:26 compute-0 nova_compute[189296]: 2025-11-28 18:22:26.628 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:22:27 compute-0 nova_compute[189296]: 2025-11-28 18:22:27.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:22:27 compute-0 nova_compute[189296]: 2025-11-28 18:22:27.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:22:27 compute-0 nova_compute[189296]: 2025-11-28 18:22:27.682 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:22:27 compute-0 nova_compute[189296]: 2025-11-28 18:22:27.683 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:22:27 compute-0 nova_compute[189296]: 2025-11-28 18:22:27.684 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:22:27 compute-0 nova_compute[189296]: 2025-11-28 18:22:27.684 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:22:27 compute-0 podman[251823]: 2025-11-28 18:22:27.917628696 +0000 UTC m=+0.163575894 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 28 18:22:28 compute-0 nova_compute[189296]: 2025-11-28 18:22:28.053 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:22:28 compute-0 nova_compute[189296]: 2025-11-28 18:22:28.114 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:22:28 compute-0 nova_compute[189296]: 2025-11-28 18:22:28.116 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:22:28 compute-0 nova_compute[189296]: 2025-11-28 18:22:28.179 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:22:28 compute-0 nova_compute[189296]: 2025-11-28 18:22:28.188 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:22:28 compute-0 nova_compute[189296]: 2025-11-28 18:22:28.249 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:22:28 compute-0 nova_compute[189296]: 2025-11-28 18:22:28.250 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:22:28 compute-0 nova_compute[189296]: 2025-11-28 18:22:28.313 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:22:28 compute-0 nova_compute[189296]: 2025-11-28 18:22:28.714 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:22:28 compute-0 nova_compute[189296]: 2025-11-28 18:22:28.724 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:22:28 compute-0 nova_compute[189296]: 2025-11-28 18:22:28.726 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5020MB free_disk=72.30593490600586GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:22:28 compute-0 nova_compute[189296]: 2025-11-28 18:22:28.726 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:22:28 compute-0 nova_compute[189296]: 2025-11-28 18:22:28.727 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:22:28 compute-0 nova_compute[189296]: 2025-11-28 18:22:28.813 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 6b358f92-75c9-4c1b-8a5c-733f8ded1782 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:22:28 compute-0 nova_compute[189296]: 2025-11-28 18:22:28.814 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 200bd8bc-d121-4a86-b728-ea98aac95adf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:22:28 compute-0 nova_compute[189296]: 2025-11-28 18:22:28.815 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:22:28 compute-0 nova_compute[189296]: 2025-11-28 18:22:28.816 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:22:28 compute-0 nova_compute[189296]: 2025-11-28 18:22:28.892 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:22:28 compute-0 nova_compute[189296]: 2025-11-28 18:22:28.912 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:22:28 compute-0 nova_compute[189296]: 2025-11-28 18:22:28.936 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:22:28 compute-0 nova_compute[189296]: 2025-11-28 18:22:28.936 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:22:29 compute-0 podman[203494]: time="2025-11-28T18:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:22:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30755 "" "Go-http-client/1.1"
Nov 28 18:22:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5252 "" "Go-http-client/1.1"
Nov 28 18:22:30 compute-0 nova_compute[189296]: 2025-11-28 18:22:30.071 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:22:30 compute-0 ovn_controller[97771]: 2025-11-28T18:22:30Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ca:73:7d 10.100.0.5
Nov 28 18:22:30 compute-0 ovn_controller[97771]: 2025-11-28T18:22:30Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ca:73:7d 10.100.0.5
Nov 28 18:22:31 compute-0 openstack_network_exporter[205632]: ERROR   18:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:22:31 compute-0 openstack_network_exporter[205632]: ERROR   18:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:22:31 compute-0 openstack_network_exporter[205632]: ERROR   18:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:22:31 compute-0 openstack_network_exporter[205632]: ERROR   18:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:22:31 compute-0 openstack_network_exporter[205632]: ERROR   18:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:22:31 compute-0 nova_compute[189296]: 2025-11-28 18:22:31.937 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 18:22:31 compute-0 nova_compute[189296]: 2025-11-28 18:22:31.960 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 18:22:33 compute-0 nova_compute[189296]: 2025-11-28 18:22:33.716 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:22:35 compute-0 nova_compute[189296]: 2025-11-28 18:22:35.077 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:22:36 compute-0 podman[251865]: 2025-11-28 18:22:36.073963058 +0000 UTC m=+0.118957885 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 28 18:22:38 compute-0 nova_compute[189296]: 2025-11-28 18:22:38.719 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:22:39 compute-0 ovn_controller[97771]: 2025-11-28T18:22:39Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c6:fd:79 10.100.2.67
Nov 28 18:22:39 compute-0 ovn_controller[97771]: 2025-11-28T18:22:39Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c6:fd:79 10.100.2.67
Nov 28 18:22:40 compute-0 nova_compute[189296]: 2025-11-28 18:22:40.080 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:22:40 compute-0 nova_compute[189296]: 2025-11-28 18:22:40.813 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:22:40 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:40.812 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:8b:d3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:a2:f8:d3:3f:9a'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 28 18:22:40 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:40.814 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 28 18:22:40 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:40.815 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d60b742f-7e94-4137-b50a-cfc8eac54167, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 28 18:22:43 compute-0 nova_compute[189296]: 2025-11-28 18:22:43.721 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:22:45 compute-0 nova_compute[189296]: 2025-11-28 18:22:45.083 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:22:48 compute-0 nova_compute[189296]: 2025-11-28 18:22:48.729 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:22:49 compute-0 podman[251897]: 2025-11-28 18:22:49.05765569 +0000 UTC m=+0.104039896 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.buildah.version=1.33.7, vcs-type=git, container_name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Nov 28 18:22:49 compute-0 podman[251898]: 2025-11-28 18:22:49.069227705 +0000 UTC m=+0.098857258 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_id=edpm)
Nov 28 18:22:49 compute-0 podman[251902]: 2025-11-28 18:22:49.080333299 +0000 UTC m=+0.100891509 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 28 18:22:50 compute-0 nova_compute[189296]: 2025-11-28 18:22:50.089 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.986 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.987 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.987 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.988 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc143395760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.989 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.989 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.989 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.989 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.990 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.991 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.991 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.992 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.992 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.992 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.993 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.993 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.994 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 6b358f92-75c9-4c1b-8a5c-733f8ded1782 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.994 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.995 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/6b358f92-75c9-4c1b-8a5c-733f8ded1782 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}1b19fef84fe76c5f8eb41f423a94cfc31b2af00fb7940935967c184dd40fa55a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.996 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.998 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.998 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.998 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.999 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.999 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:22:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.999 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:51.999 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.000 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.000 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.000 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.502 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 2082 Content-Type: application/json Date: Fri, 28 Nov 2025 18:22:52 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-027e3dd3-25c5-4775-82ca-f46eab7bc2d1 x-openstack-request-id: req-027e3dd3-25c5-4775-82ca-f46eab7bc2d1 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.502 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "6b358f92-75c9-4c1b-8a5c-733f8ded1782", "name": "tempest-TestServerBasicOps-server-1812090626", "status": "ACTIVE", "tenant_id": "b1d450a53bb64bd7b153b2c9c627f3c1", "user_id": "7197aa467f2241e2a95a2fc057f4d01c", "metadata": {"meta1": "data1", "meta2": "data2", "metaN": "dataN"}, "hostId": "bdeda39aaf577e86647841592c7d9917386cfdc963e9ea507cd293e1", "image": {"id": "ffec9e61-65fb-46ae-8d34-338639229ec3", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ffec9e61-65fb-46ae-8d34-338639229ec3"}]}, "flavor": {"id": "b177f611-8f79-4bfd-9a12-e83e9545757b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/b177f611-8f79-4bfd-9a12-e83e9545757b"}]}, "created": "2025-11-28T18:21:51Z", "updated": "2025-11-28T18:21:58Z", "addresses": {"tempest-TestServerBasicOps-9270562-network": [{"version": 4, "addr": "10.100.0.5", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ca:73:7d"}, {"version": 4, "addr": "192.168.122.247", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ca:73:7d"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/6b358f92-75c9-4c1b-8a5c-733f8ded1782"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/6b358f92-75c9-4c1b-8a5c-733f8ded1782"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestServerBasicOps-1283283664", "OS-SRV-USG:launched_at": "2025-11-28T18:21:58.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--2128624218"}, {"name": "tempest-secgroup-smoke-1174684827"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000e", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.503 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/6b358f92-75c9-4c1b-8a5c-733f8ded1782 used request id req-027e3dd3-25c5-4775-82ca-f46eab7bc2d1 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.504 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '6b358f92-75c9-4c1b-8a5c-733f8ded1782', 'name': 'tempest-TestServerBasicOps-server-1812090626', 'flavor': {'id': 'b177f611-8f79-4bfd-9a12-e83e9545757b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ffec9e61-65fb-46ae-8d34-338639229ec3'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b1d450a53bb64bd7b153b2c9c627f3c1', 'user_id': '7197aa467f2241e2a95a2fc057f4d01c', 'hostId': 'bdeda39aaf577e86647841592c7d9917386cfdc963e9ea507cd293e1', 'status': 'active', 'metadata': {'meta1': 'data1', 'meta2': 'data2', 'metaN': 'dataN'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.508 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 200bd8bc-d121-4a86-b728-ea98aac95adf from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.509 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/200bd8bc-d121-4a86-b728-ea98aac95adf -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}1b19fef84fe76c5f8eb41f423a94cfc31b2af00fb7940935967c184dd40fa55a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 28 18:22:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:52.634 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:22:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:52.635 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:22:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:22:52.635 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.938 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1831 Content-Type: application/json Date: Fri, 28 Nov 2025 18:22:52 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-eaf009b4-f69f-4f98-a177-256e31149740 x-openstack-request-id: req-eaf009b4-f69f-4f98-a177-256e31149740 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.938 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "200bd8bc-d121-4a86-b728-ea98aac95adf", "name": "te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw", "status": "ACTIVE", "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "user_id": "c1f6c07dc6c5400cbf4fa724992b16d3", "metadata": {"metering.server_group": "a12ef97f-9351-448f-95c7-ab90e2c7b098"}, "hostId": "d63a60f107fb9172c58f42464c0d0697d316dd72980345b387d4da6d", "image": {"id": "7d5268e2-45b5-44b2-b3c1-3da9b27b258e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/7d5268e2-45b5-44b2-b3c1-3da9b27b258e"}]}, "flavor": {"id": "b177f611-8f79-4bfd-9a12-e83e9545757b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/b177f611-8f79-4bfd-9a12-e83e9545757b"}]}, "created": "2025-11-28T18:22:00Z", "updated": "2025-11-28T18:22:06Z", "addresses": {"": [{"version": 4, "addr": "10.100.2.67", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:c6:fd:79"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/200bd8bc-d121-4a86-b728-ea98aac95adf"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/200bd8bc-d121-4a86-b728-ea98aac95adf"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-28T18:22:06.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000f", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.938 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/200bd8bc-d121-4a86-b728-ea98aac95adf used request id req-eaf009b4-f69f-4f98-a177-256e31149740 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.939 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '200bd8bc-d121-4a86-b728-ea98aac95adf', 'name': 'te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw', 'flavor': {'id': 'b177f611-8f79-4bfd-9a12-e83e9545757b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '7d5268e2-45b5-44b2-b3c1-3da9b27b258e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4c71a276f38f4bfebf1d3631d6f82966', 'user_id': 'c1f6c07dc6c5400cbf4fa724992b16d3', 'hostId': 'd63a60f107fb9172c58f42464c0d0697d316dd72980345b387d4da6d', 'status': 'active', 'metadata': {'metering.server_group': 'a12ef97f-9351-448f-95c7-ab90e2c7b098'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.940 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.940 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.940 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.940 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.941 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-28T18:22:52.940692) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.962 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.963 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.979 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.980 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.980 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.980 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc1433970b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.980 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.980 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.980 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.980 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:22:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:52.981 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-28T18:22:52.980817) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.038 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk.device.read.bytes volume: 30304768 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.038 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.084 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.bytes volume: 29338624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.084 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.085 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.085 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc1433971d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.085 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.085 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.085 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.085 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.085 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk.device.read.latency volume: 1076428338 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.086 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk.device.read.latency volume: 54495777 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.086 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.latency volume: 562549638 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.086 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.latency volume: 45170226 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.087 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.087 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc143397c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.087 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.087 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.087 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.087 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.088 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-28T18:22:53.085791) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.088 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-28T18:22:53.087828) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.091 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 6b358f92-75c9-4c1b-8a5c-733f8ded1782 / tapcc026db1-bd inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.091 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.095 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 200bd8bc-d121-4a86-b728-ea98aac95adf / tap49c3cd00-3b inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.095 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.095 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.096 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc143397620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.096 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.096 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.096 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.096 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.097 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-28T18:22:53.096397) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.132 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/memory.usage volume: 46.84375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.160 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/memory.usage volume: 43.53515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.161 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.161 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc143397260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.161 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.161 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.161 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.161 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.161 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.162 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.162 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.162 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.162 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.163 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc143397290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.163 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.163 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.163 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.163 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.163 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk.device.write.bytes volume: 72937472 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.163 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.163 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.bytes volume: 72810496 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.164 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.164 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.164 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc143408290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.164 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.164 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.164 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.164 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.165 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.165 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.165 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.165 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc1433972f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.165 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.165 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.165 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.165 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.165 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk.device.write.latency volume: 3236838754 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.166 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.166 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.latency volume: 2347449202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.166 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.166 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.167 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc144640f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.167 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.167 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.167 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.167 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.167 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk.device.write.requests volume: 316 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.167 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.167 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.requests volume: 299 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.168 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.168 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.168 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc1433976b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.168 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.168 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.168 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.168 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.168 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.169 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.169 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.169 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc143397fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.169 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.169 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.169 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.169 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.169 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.170 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.170 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.170 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc14457db80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.170 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.170 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.170 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.170 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.171 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/cpu volume: 31900000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.171 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/cpu volume: 45560000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.171 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.171 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc143397950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.171 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.171 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.171 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.171 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.172 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.172 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-TestServerBasicOps-server-1812090626>, <NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestServerBasicOps-server-1812090626>, <NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw>]
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.172 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc143397380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.172 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.172 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.172 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.172 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.173 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.173 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc143397bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.173 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.173 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.173 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.173 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.173 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.173 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.174 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.174 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc1433973e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.174 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.174 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.174 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.174 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.174 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.175 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc143397c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.175 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.175 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.175 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.175 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.175 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.175 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.175 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.176 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc143397ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.176 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.176 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.176 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.176 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.176 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.176 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.176 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.177 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc1460ad370>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.177 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.174 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-28T18:22:53.161864) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.177 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.177 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-28T18:22:53.163395) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.177 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.177 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.177 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-28T18:22:53.164901) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.177 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk.device.allocation volume: 30023680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.177 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-28T18:22:53.165887) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.177 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-28T18:22:53.167326) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.177 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.177 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-28T18:22:53.168801) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.178 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.178 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.178 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.178 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc143397d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.178 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.178 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.178 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.178 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.179 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.179 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.179 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.179 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc143397e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.179 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.179 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.179 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.180 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.180 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.180 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-TestServerBasicOps-server-1812090626>, <NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestServerBasicOps-server-1812090626>, <NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw>]
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.180 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc143397650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.180 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.180 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.181 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.181 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.181 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/network.incoming.bytes volume: 1796 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.181 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.182 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.182 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc143397e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.182 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-28T18:22:53.169892) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.182 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.182 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-28T18:22:53.170914) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.182 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-28T18:22:53.171979) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.182 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.182 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-28T18:22:53.172698) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.182 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.182 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-28T18:22:53.173657) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.182 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.182 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-28T18:22:53.174646) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.182 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.183 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.183 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.184 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc143397f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.184 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.184 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.184 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.184 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.184 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.184 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.185 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.185 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc143397230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.185 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.185 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.185 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.185 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.185 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk.device.read.requests volume: 1090 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.186 15 DEBUG ceilometer.compute.pollsters [-] 6b358f92-75c9-4c1b-8a5c-733f8ded1782/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.183 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-28T18:22:53.175393) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.186 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-28T18:22:53.176411) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.186 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-28T18:22:53.177448) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.186 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.requests volume: 1056 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.186 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-28T18:22:53.178917) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.186 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.186 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-28T18:22:53.179977) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.186 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-28T18:22:53.181075) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.186 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-28T18:22:53.182839) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.186 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-28T18:22:53.184421) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.186 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.187 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.187 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.187 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.188 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.188 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.188 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.188 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.188 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.188 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.188 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.188 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.188 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.188 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.188 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.188 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.188 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.188 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.188 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.188 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.189 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.189 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.189 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.189 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.189 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.189 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.189 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:22:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:22:53.187 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-28T18:22:53.185638) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:22:53 compute-0 nova_compute[189296]: 2025-11-28 18:22:53.732 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:22:55 compute-0 nova_compute[189296]: 2025-11-28 18:22:55.093 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:22:56 compute-0 podman[251956]: 2025-11-28 18:22:56.066790699 +0000 UTC m=+0.102260933 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 28 18:22:56 compute-0 podman[251955]: 2025-11-28 18:22:56.072382066 +0000 UTC m=+0.128096840 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 28 18:22:56 compute-0 podman[251962]: 2025-11-28 18:22:56.105637317 +0000 UTC m=+0.132492189 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.buildah.version=1.29.0, name=ubi9, architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, release-0.7.12=)
Nov 28 18:22:56 compute-0 podman[251963]: 2025-11-28 18:22:56.127334481 +0000 UTC m=+0.138422054 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Nov 28 18:22:58 compute-0 nova_compute[189296]: 2025-11-28 18:22:58.734 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:22:59 compute-0 podman[252028]: 2025-11-28 18:22:59.064631271 +0000 UTC m=+0.125123747 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 28 18:22:59 compute-0 podman[203494]: time="2025-11-28T18:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:22:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30755 "" "Go-http-client/1.1"
Nov 28 18:22:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5256 "" "Go-http-client/1.1"
Nov 28 18:23:00 compute-0 nova_compute[189296]: 2025-11-28 18:23:00.098 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:01 compute-0 openstack_network_exporter[205632]: ERROR   18:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:23:01 compute-0 openstack_network_exporter[205632]: ERROR   18:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:23:01 compute-0 openstack_network_exporter[205632]: ERROR   18:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:23:01 compute-0 openstack_network_exporter[205632]: ERROR   18:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:23:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:23:01 compute-0 openstack_network_exporter[205632]: ERROR   18:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:23:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:23:03 compute-0 nova_compute[189296]: 2025-11-28 18:23:03.736 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:05 compute-0 nova_compute[189296]: 2025-11-28 18:23:05.103 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:06.581 106729 DEBUG eventlet.wsgi.server [-] (106729) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Nov 28 18:23:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:06.582 106729 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0#015
Nov 28 18:23:06 compute-0 ovn_metadata_agent[106619]: Accept: */*#015
Nov 28 18:23:06 compute-0 ovn_metadata_agent[106619]: Connection: close#015
Nov 28 18:23:06 compute-0 ovn_metadata_agent[106619]: Content-Type: text/plain#015
Nov 28 18:23:06 compute-0 ovn_metadata_agent[106619]: Host: 169.254.169.254#015
Nov 28 18:23:06 compute-0 ovn_metadata_agent[106619]: User-Agent: curl/7.84.0#015
Nov 28 18:23:06 compute-0 ovn_metadata_agent[106619]: X-Forwarded-For: 10.100.0.5#015
Nov 28 18:23:06 compute-0 ovn_metadata_agent[106619]: X-Ovn-Network-Id: ec1293c7-fc62-4fad-8363-d05beea77f1d __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Nov 28 18:23:07 compute-0 podman[252054]: 2025-11-28 18:23:07.046485499 +0000 UTC m=+0.105297708 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:23:08 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:08.183 106729 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Nov 28 18:23:08 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:08.184 106729 INFO eventlet.wsgi.server [-] 10.100.0.5,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.6016741#033[00m
Nov 28 18:23:08 compute-0 haproxy-metadata-proxy-ec1293c7-fc62-4fad-8363-d05beea77f1d[251506]: 10.100.0.5:51650 [28/Nov/2025:18:23:06.579] listener listener/metadata 0/0/0/1604/1604 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Nov 28 18:23:08 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:08.258 106729 DEBUG eventlet.wsgi.server [-] (106729) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Nov 28 18:23:08 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:08.259 106729 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0#015
Nov 28 18:23:08 compute-0 ovn_metadata_agent[106619]: Accept: */*#015
Nov 28 18:23:08 compute-0 ovn_metadata_agent[106619]: Connection: close#015
Nov 28 18:23:08 compute-0 ovn_metadata_agent[106619]: Content-Length: 100#015
Nov 28 18:23:08 compute-0 ovn_metadata_agent[106619]: Content-Type: application/x-www-form-urlencoded#015
Nov 28 18:23:08 compute-0 ovn_metadata_agent[106619]: Host: 169.254.169.254#015
Nov 28 18:23:08 compute-0 ovn_metadata_agent[106619]: User-Agent: curl/7.84.0#015
Nov 28 18:23:08 compute-0 ovn_metadata_agent[106619]: X-Forwarded-For: 10.100.0.5#015
Nov 28 18:23:08 compute-0 ovn_metadata_agent[106619]: X-Ovn-Network-Id: ec1293c7-fc62-4fad-8363-d05beea77f1d#015
Nov 28 18:23:08 compute-0 ovn_metadata_agent[106619]: #015
Nov 28 18:23:08 compute-0 ovn_metadata_agent[106619]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Nov 28 18:23:08 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:08.509 106729 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Nov 28 18:23:08 compute-0 haproxy-metadata-proxy-ec1293c7-fc62-4fad-8363-d05beea77f1d[251506]: 10.100.0.5:51652 [28/Nov/2025:18:23:08.257] listener listener/metadata 0/0/0/253/253 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Nov 28 18:23:08 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:08.514 106729 INFO eventlet.wsgi.server [-] 10.100.0.5,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2515023#033[00m
Nov 28 18:23:08 compute-0 nova_compute[189296]: 2025-11-28 18:23:08.738 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:10 compute-0 nova_compute[189296]: 2025-11-28 18:23:10.105 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:10 compute-0 nova_compute[189296]: 2025-11-28 18:23:10.714 189300 DEBUG oslo_concurrency.lockutils [None req-f7efbdb4-e81a-433c-8be8-4aec1c4abf83 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Acquiring lock "6b358f92-75c9-4c1b-8a5c-733f8ded1782" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:23:10 compute-0 nova_compute[189296]: 2025-11-28 18:23:10.716 189300 DEBUG oslo_concurrency.lockutils [None req-f7efbdb4-e81a-433c-8be8-4aec1c4abf83 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Lock "6b358f92-75c9-4c1b-8a5c-733f8ded1782" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:23:10 compute-0 nova_compute[189296]: 2025-11-28 18:23:10.717 189300 DEBUG oslo_concurrency.lockutils [None req-f7efbdb4-e81a-433c-8be8-4aec1c4abf83 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Acquiring lock "6b358f92-75c9-4c1b-8a5c-733f8ded1782-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:23:10 compute-0 nova_compute[189296]: 2025-11-28 18:23:10.718 189300 DEBUG oslo_concurrency.lockutils [None req-f7efbdb4-e81a-433c-8be8-4aec1c4abf83 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Lock "6b358f92-75c9-4c1b-8a5c-733f8ded1782-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:23:10 compute-0 nova_compute[189296]: 2025-11-28 18:23:10.719 189300 DEBUG oslo_concurrency.lockutils [None req-f7efbdb4-e81a-433c-8be8-4aec1c4abf83 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Lock "6b358f92-75c9-4c1b-8a5c-733f8ded1782-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:23:10 compute-0 nova_compute[189296]: 2025-11-28 18:23:10.722 189300 INFO nova.compute.manager [None req-f7efbdb4-e81a-433c-8be8-4aec1c4abf83 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Terminating instance#033[00m
Nov 28 18:23:10 compute-0 nova_compute[189296]: 2025-11-28 18:23:10.724 189300 DEBUG nova.compute.manager [None req-f7efbdb4-e81a-433c-8be8-4aec1c4abf83 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 28 18:23:10 compute-0 kernel: tapcc026db1-bd (unregistering): left promiscuous mode
Nov 28 18:23:10 compute-0 NetworkManager[56307]: <info>  [1764354190.7681] device (tapcc026db1-bd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 28 18:23:10 compute-0 ovn_controller[97771]: 2025-11-28T18:23:10Z|00173|binding|INFO|Releasing lport cc026db1-bd40-49d3-8cc6-fd774decc303 from this chassis (sb_readonly=0)
Nov 28 18:23:10 compute-0 ovn_controller[97771]: 2025-11-28T18:23:10Z|00174|binding|INFO|Setting lport cc026db1-bd40-49d3-8cc6-fd774decc303 down in Southbound
Nov 28 18:23:10 compute-0 nova_compute[189296]: 2025-11-28 18:23:10.774 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:10 compute-0 ovn_controller[97771]: 2025-11-28T18:23:10Z|00175|binding|INFO|Removing iface tapcc026db1-bd ovn-installed in OVS
Nov 28 18:23:10 compute-0 nova_compute[189296]: 2025-11-28 18:23:10.780 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:10 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:10.788 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:73:7d 10.100.0.5'], port_security=['fa:16:3e:ca:73:7d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '6b358f92-75c9-4c1b-8a5c-733f8ded1782', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec1293c7-fc62-4fad-8363-d05beea77f1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1d450a53bb64bd7b153b2c9c627f3c1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4ae52902-3d4c-4c2b-9227-2708d93eb132 b9ef8706-c336-4710-abcc-5ba43506f30b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5c397d12-2b6f-4f0c-a9d3-8b717254aec4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=cc026db1-bd40-49d3-8cc6-fd774decc303) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:23:10 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:10.790 106624 INFO neutron.agent.ovn.metadata.agent [-] Port cc026db1-bd40-49d3-8cc6-fd774decc303 in datapath ec1293c7-fc62-4fad-8363-d05beea77f1d unbound from our chassis#033[00m
Nov 28 18:23:10 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:10.791 106624 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ec1293c7-fc62-4fad-8363-d05beea77f1d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 28 18:23:10 compute-0 nova_compute[189296]: 2025-11-28 18:23:10.794 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:10 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:10.793 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e2d3a8-e233-43be-91cc-8cf27cbca350]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:23:10 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:10.796 106624 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ec1293c7-fc62-4fad-8363-d05beea77f1d namespace which is not needed anymore#033[00m
Nov 28 18:23:10 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Nov 28 18:23:10 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Consumed 39.339s CPU time.
Nov 28 18:23:10 compute-0 systemd-machined[155703]: Machine qemu-15-instance-0000000e terminated.
Nov 28 18:23:10 compute-0 NetworkManager[56307]: <info>  [1764354190.9496] manager: (tapcc026db1-bd): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Nov 28 18:23:10 compute-0 neutron-haproxy-ovnmeta-ec1293c7-fc62-4fad-8363-d05beea77f1d[251500]: [NOTICE]   (251504) : haproxy version is 2.8.14-c23fe91
Nov 28 18:23:10 compute-0 neutron-haproxy-ovnmeta-ec1293c7-fc62-4fad-8363-d05beea77f1d[251500]: [NOTICE]   (251504) : path to executable is /usr/sbin/haproxy
Nov 28 18:23:10 compute-0 neutron-haproxy-ovnmeta-ec1293c7-fc62-4fad-8363-d05beea77f1d[251500]: [WARNING]  (251504) : Exiting Master process...
Nov 28 18:23:10 compute-0 neutron-haproxy-ovnmeta-ec1293c7-fc62-4fad-8363-d05beea77f1d[251500]: [ALERT]    (251504) : Current worker (251506) exited with code 143 (Terminated)
Nov 28 18:23:10 compute-0 neutron-haproxy-ovnmeta-ec1293c7-fc62-4fad-8363-d05beea77f1d[251500]: [WARNING]  (251504) : All workers exited. Exiting... (0)
Nov 28 18:23:10 compute-0 systemd[1]: libpod-952d003d55eaa62f50e008fde202edb9be27e15f24a5c9c759582b24123d176b.scope: Deactivated successfully.
Nov 28 18:23:11 compute-0 podman[252099]: 2025-11-28 18:23:11.002984484 +0000 UTC m=+0.067076115 container died 952d003d55eaa62f50e008fde202edb9be27e15f24a5c9c759582b24123d176b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec1293c7-fc62-4fad-8363-d05beea77f1d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.010 189300 INFO nova.virt.libvirt.driver [-] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Instance destroyed successfully.#033[00m
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.011 189300 DEBUG nova.objects.instance [None req-f7efbdb4-e81a-433c-8be8-4aec1c4abf83 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Lazy-loading 'resources' on Instance uuid 6b358f92-75c9-4c1b-8a5c-733f8ded1782 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.025 189300 DEBUG nova.virt.libvirt.vif [None req-f7efbdb4-e81a-433c-8be8-4aec1c4abf83 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-28T18:21:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1812090626',display_name='tempest-TestServerBasicOps-server-1812090626',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1812090626',id=14,image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBIL3uAOGTo+nMzP3wX27O/PfnsMUHgfu5KskMbB7er4XF35b7mwr0mDblM+CV5ci+6ML/mzE/9nnMD4AGEKYgiWIXSD818xQQvavqp95iXvEMVe2GYwVCN2yCC59qi26A==',key_name='tempest-TestServerBasicOps-1283283664',keypairs=<?>,launch_index=0,launched_at=2025-11-28T18:21:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b1d450a53bb64bd7b153b2c9c627f3c1',ramdisk_id='',reservation_id='r-wasghw9p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ffec9e61-65fb-46ae-8d34-338639229ec3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-640022481',owner_user_name='tempest-TestServerBasicOps-640022481-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-28T18:23:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7197aa467f2241e2a95a2fc057f4d01c',uuid=6b358f92-75c9-4c1b-8a5c-733f8ded1782,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cc026db1-bd40-49d3-8cc6-fd774decc303", "address": 
"fa:16:3e:ca:73:7d", "network": {"id": "ec1293c7-fc62-4fad-8363-d05beea77f1d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-9270562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1d450a53bb64bd7b153b2c9c627f3c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc026db1-bd", "ovs_interfaceid": "cc026db1-bd40-49d3-8cc6-fd774decc303", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.025 189300 DEBUG nova.network.os_vif_util [None req-f7efbdb4-e81a-433c-8be8-4aec1c4abf83 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Converting VIF {"id": "cc026db1-bd40-49d3-8cc6-fd774decc303", "address": "fa:16:3e:ca:73:7d", "network": {"id": "ec1293c7-fc62-4fad-8363-d05beea77f1d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-9270562-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1d450a53bb64bd7b153b2c9c627f3c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc026db1-bd", "ovs_interfaceid": "cc026db1-bd40-49d3-8cc6-fd774decc303", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.027 189300 DEBUG nova.network.os_vif_util [None req-f7efbdb4-e81a-433c-8be8-4aec1c4abf83 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ca:73:7d,bridge_name='br-int',has_traffic_filtering=True,id=cc026db1-bd40-49d3-8cc6-fd774decc303,network=Network(ec1293c7-fc62-4fad-8363-d05beea77f1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc026db1-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.028 189300 DEBUG os_vif [None req-f7efbdb4-e81a-433c-8be8-4aec1c4abf83 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ca:73:7d,bridge_name='br-int',has_traffic_filtering=True,id=cc026db1-bd40-49d3-8cc6-fd774decc303,network=Network(ec1293c7-fc62-4fad-8363-d05beea77f1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc026db1-bd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.030 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.031 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc026db1-bd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.033 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.034 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.037 189300 INFO os_vif [None req-f7efbdb4-e81a-433c-8be8-4aec1c4abf83 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ca:73:7d,bridge_name='br-int',has_traffic_filtering=True,id=cc026db1-bd40-49d3-8cc6-fd774decc303,network=Network(ec1293c7-fc62-4fad-8363-d05beea77f1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc026db1-bd')#033[00m
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.038 189300 INFO nova.virt.libvirt.driver [None req-f7efbdb4-e81a-433c-8be8-4aec1c4abf83 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Deleting instance files /var/lib/nova/instances/6b358f92-75c9-4c1b-8a5c-733f8ded1782_del#033[00m
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.039 189300 INFO nova.virt.libvirt.driver [None req-f7efbdb4-e81a-433c-8be8-4aec1c4abf83 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Deletion of /var/lib/nova/instances/6b358f92-75c9-4c1b-8a5c-733f8ded1782_del complete#033[00m
Nov 28 18:23:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-952d003d55eaa62f50e008fde202edb9be27e15f24a5c9c759582b24123d176b-userdata-shm.mount: Deactivated successfully.
Nov 28 18:23:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-60ed904e887392780f5578d090095ae5b4b882d1845c74e9eb7198835b95b48e-merged.mount: Deactivated successfully.
Nov 28 18:23:11 compute-0 podman[252099]: 2025-11-28 18:23:11.05231715 +0000 UTC m=+0.116408781 container cleanup 952d003d55eaa62f50e008fde202edb9be27e15f24a5c9c759582b24123d176b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec1293c7-fc62-4fad-8363-d05beea77f1d, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:23:11 compute-0 systemd[1]: libpod-conmon-952d003d55eaa62f50e008fde202edb9be27e15f24a5c9c759582b24123d176b.scope: Deactivated successfully.
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.091 189300 INFO nova.compute.manager [None req-f7efbdb4-e81a-433c-8be8-4aec1c4abf83 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Took 0.37 seconds to destroy the instance on the hypervisor.#033[00m
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.092 189300 DEBUG oslo.service.loopingcall [None req-f7efbdb4-e81a-433c-8be8-4aec1c4abf83 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.093 189300 DEBUG nova.compute.manager [-] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.094 189300 DEBUG nova.network.neutron [-] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 28 18:23:11 compute-0 podman[252144]: 2025-11-28 18:23:11.125788212 +0000 UTC m=+0.047647116 container remove 952d003d55eaa62f50e008fde202edb9be27e15f24a5c9c759582b24123d176b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec1293c7-fc62-4fad-8363-d05beea77f1d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:23:11 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:11.133 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[c2d0ac1a-13a6-4cc2-bfe2-039b40b80dec]: (4, ('Fri Nov 28 06:23:10 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ec1293c7-fc62-4fad-8363-d05beea77f1d (952d003d55eaa62f50e008fde202edb9be27e15f24a5c9c759582b24123d176b)\n952d003d55eaa62f50e008fde202edb9be27e15f24a5c9c759582b24123d176b\nFri Nov 28 06:23:11 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ec1293c7-fc62-4fad-8363-d05beea77f1d (952d003d55eaa62f50e008fde202edb9be27e15f24a5c9c759582b24123d176b)\n952d003d55eaa62f50e008fde202edb9be27e15f24a5c9c759582b24123d176b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:23:11 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:11.135 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[d8078e0c-fa55-492d-982c-7d4aac95d915]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:23:11 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:11.136 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec1293c7-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:23:11 compute-0 kernel: tapec1293c7-f0: left promiscuous mode
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.137 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.150 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.151 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:11 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:11.154 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[230e4ed8-d68e-4104-8221-1f5d5e5add85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:23:11 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:11.170 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[310ac72f-f5e7-4316-809d-637bbdd658ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:23:11 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:11.171 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[e65ea99b-2250-49eb-ad07-bab82c7139b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:23:11 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:11.186 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[33eb9381-3388-463f-bab0-e5a9802c50fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526321, 'reachable_time': 25788, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252158, 'error': None, 'target': 'ovnmeta-ec1293c7-fc62-4fad-8363-d05beea77f1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:23:11 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:11.189 106734 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ec1293c7-fc62-4fad-8363-d05beea77f1d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 28 18:23:11 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:11.189 106734 DEBUG oslo.privsep.daemon [-] privsep: reply[3b9e9d06-693e-4954-8e50-c7f1136fa5d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:23:11 compute-0 systemd[1]: run-netns-ovnmeta\x2dec1293c7\x2dfc62\x2d4fad\x2d8363\x2dd05beea77f1d.mount: Deactivated successfully.
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.263 189300 DEBUG nova.compute.manager [req-060be6fb-5d10-47e4-95ca-fcb71bb1f73d req-9cedcf46-6439-4a98-8925-5202437c5434 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Received event network-vif-unplugged-cc026db1-bd40-49d3-8cc6-fd774decc303 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.263 189300 DEBUG oslo_concurrency.lockutils [req-060be6fb-5d10-47e4-95ca-fcb71bb1f73d req-9cedcf46-6439-4a98-8925-5202437c5434 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "6b358f92-75c9-4c1b-8a5c-733f8ded1782-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.263 189300 DEBUG oslo_concurrency.lockutils [req-060be6fb-5d10-47e4-95ca-fcb71bb1f73d req-9cedcf46-6439-4a98-8925-5202437c5434 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "6b358f92-75c9-4c1b-8a5c-733f8ded1782-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.263 189300 DEBUG oslo_concurrency.lockutils [req-060be6fb-5d10-47e4-95ca-fcb71bb1f73d req-9cedcf46-6439-4a98-8925-5202437c5434 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "6b358f92-75c9-4c1b-8a5c-733f8ded1782-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.263 189300 DEBUG nova.compute.manager [req-060be6fb-5d10-47e4-95ca-fcb71bb1f73d req-9cedcf46-6439-4a98-8925-5202437c5434 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] No waiting events found dispatching network-vif-unplugged-cc026db1-bd40-49d3-8cc6-fd774decc303 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.264 189300 DEBUG nova.compute.manager [req-060be6fb-5d10-47e4-95ca-fcb71bb1f73d req-9cedcf46-6439-4a98-8925-5202437c5434 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Received event network-vif-unplugged-cc026db1-bd40-49d3-8cc6-fd774decc303 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 28 18:23:11 compute-0 nova_compute[189296]: 2025-11-28 18:23:11.989 189300 DEBUG nova.network.neutron [-] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:23:12 compute-0 nova_compute[189296]: 2025-11-28 18:23:12.005 189300 INFO nova.compute.manager [-] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Took 0.91 seconds to deallocate network for instance.#033[00m
Nov 28 18:23:12 compute-0 nova_compute[189296]: 2025-11-28 18:23:12.040 189300 DEBUG oslo_concurrency.lockutils [None req-f7efbdb4-e81a-433c-8be8-4aec1c4abf83 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:23:12 compute-0 nova_compute[189296]: 2025-11-28 18:23:12.041 189300 DEBUG oslo_concurrency.lockutils [None req-f7efbdb4-e81a-433c-8be8-4aec1c4abf83 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:23:12 compute-0 nova_compute[189296]: 2025-11-28 18:23:12.105 189300 DEBUG nova.compute.provider_tree [None req-f7efbdb4-e81a-433c-8be8-4aec1c4abf83 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:23:12 compute-0 nova_compute[189296]: 2025-11-28 18:23:12.132 189300 DEBUG nova.scheduler.client.report [None req-f7efbdb4-e81a-433c-8be8-4aec1c4abf83 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:23:12 compute-0 nova_compute[189296]: 2025-11-28 18:23:12.161 189300 DEBUG oslo_concurrency.lockutils [None req-f7efbdb4-e81a-433c-8be8-4aec1c4abf83 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:23:12 compute-0 nova_compute[189296]: 2025-11-28 18:23:12.185 189300 INFO nova.scheduler.client.report [None req-f7efbdb4-e81a-433c-8be8-4aec1c4abf83 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Deleted allocations for instance 6b358f92-75c9-4c1b-8a5c-733f8ded1782#033[00m
Nov 28 18:23:12 compute-0 nova_compute[189296]: 2025-11-28 18:23:12.285 189300 DEBUG oslo_concurrency.lockutils [None req-f7efbdb4-e81a-433c-8be8-4aec1c4abf83 7197aa467f2241e2a95a2fc057f4d01c b1d450a53bb64bd7b153b2c9c627f3c1 - - default default] Lock "6b358f92-75c9-4c1b-8a5c-733f8ded1782" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:23:13 compute-0 nova_compute[189296]: 2025-11-28 18:23:13.368 189300 DEBUG nova.compute.manager [req-79594cb3-a4bb-483d-a50c-3b273fd11ea1 req-7491538a-6459-4e2d-bf3b-1f2a9c4fd944 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Received event network-vif-plugged-cc026db1-bd40-49d3-8cc6-fd774decc303 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:23:13 compute-0 nova_compute[189296]: 2025-11-28 18:23:13.369 189300 DEBUG oslo_concurrency.lockutils [req-79594cb3-a4bb-483d-a50c-3b273fd11ea1 req-7491538a-6459-4e2d-bf3b-1f2a9c4fd944 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "6b358f92-75c9-4c1b-8a5c-733f8ded1782-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:23:13 compute-0 nova_compute[189296]: 2025-11-28 18:23:13.369 189300 DEBUG oslo_concurrency.lockutils [req-79594cb3-a4bb-483d-a50c-3b273fd11ea1 req-7491538a-6459-4e2d-bf3b-1f2a9c4fd944 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "6b358f92-75c9-4c1b-8a5c-733f8ded1782-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:23:13 compute-0 nova_compute[189296]: 2025-11-28 18:23:13.370 189300 DEBUG oslo_concurrency.lockutils [req-79594cb3-a4bb-483d-a50c-3b273fd11ea1 req-7491538a-6459-4e2d-bf3b-1f2a9c4fd944 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "6b358f92-75c9-4c1b-8a5c-733f8ded1782-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:23:13 compute-0 nova_compute[189296]: 2025-11-28 18:23:13.370 189300 DEBUG nova.compute.manager [req-79594cb3-a4bb-483d-a50c-3b273fd11ea1 req-7491538a-6459-4e2d-bf3b-1f2a9c4fd944 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] No waiting events found dispatching network-vif-plugged-cc026db1-bd40-49d3-8cc6-fd774decc303 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:23:13 compute-0 nova_compute[189296]: 2025-11-28 18:23:13.371 189300 WARNING nova.compute.manager [req-79594cb3-a4bb-483d-a50c-3b273fd11ea1 req-7491538a-6459-4e2d-bf3b-1f2a9c4fd944 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Received unexpected event network-vif-plugged-cc026db1-bd40-49d3-8cc6-fd774decc303 for instance with vm_state deleted and task_state None.#033[00m
Nov 28 18:23:13 compute-0 nova_compute[189296]: 2025-11-28 18:23:13.371 189300 DEBUG nova.compute.manager [req-79594cb3-a4bb-483d-a50c-3b273fd11ea1 req-7491538a-6459-4e2d-bf3b-1f2a9c4fd944 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Received event network-vif-deleted-cc026db1-bd40-49d3-8cc6-fd774decc303 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:23:13 compute-0 nova_compute[189296]: 2025-11-28 18:23:13.739 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:14 compute-0 nova_compute[189296]: 2025-11-28 18:23:14.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:23:16 compute-0 nova_compute[189296]: 2025-11-28 18:23:16.035 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:16 compute-0 nova_compute[189296]: 2025-11-28 18:23:16.642 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:23:18 compute-0 nova_compute[189296]: 2025-11-28 18:23:18.742 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:19 compute-0 nova_compute[189296]: 2025-11-28 18:23:19.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:23:20 compute-0 podman[252161]: 2025-11-28 18:23:20.029451993 +0000 UTC m=+0.082545237 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Nov 28 18:23:20 compute-0 podman[252160]: 2025-11-28 18:23:20.040092306 +0000 UTC m=+0.085498970 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., version=9.6, config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 28 18:23:20 compute-0 podman[252162]: 2025-11-28 18:23:20.05447775 +0000 UTC m=+0.097617027 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 28 18:23:20 compute-0 ovn_controller[97771]: 2025-11-28T18:23:20Z|00176|binding|INFO|Releasing lport 29b269a8-673c-48a9-bc1f-c180355b2c1b from this chassis (sb_readonly=0)
Nov 28 18:23:20 compute-0 nova_compute[189296]: 2025-11-28 18:23:20.192 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:20 compute-0 ovn_controller[97771]: 2025-11-28T18:23:20Z|00177|binding|INFO|Releasing lport 29b269a8-673c-48a9-bc1f-c180355b2c1b from this chassis (sb_readonly=0)
Nov 28 18:23:20 compute-0 nova_compute[189296]: 2025-11-28 18:23:20.453 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:21 compute-0 nova_compute[189296]: 2025-11-28 18:23:21.038 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:21 compute-0 nova_compute[189296]: 2025-11-28 18:23:21.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:23:21 compute-0 nova_compute[189296]: 2025-11-28 18:23:21.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:23:21 compute-0 nova_compute[189296]: 2025-11-28 18:23:21.624 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:23:22 compute-0 nova_compute[189296]: 2025-11-28 18:23:22.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:23:22 compute-0 nova_compute[189296]: 2025-11-28 18:23:22.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:23:22 compute-0 nova_compute[189296]: 2025-11-28 18:23:22.627 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 18:23:22 compute-0 nova_compute[189296]: 2025-11-28 18:23:22.953 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:23:22 compute-0 nova_compute[189296]: 2025-11-28 18:23:22.954 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:23:22 compute-0 nova_compute[189296]: 2025-11-28 18:23:22.955 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:23:22 compute-0 nova_compute[189296]: 2025-11-28 18:23:22.955 189300 DEBUG nova.objects.instance [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 200bd8bc-d121-4a86-b728-ea98aac95adf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:23:23 compute-0 nova_compute[189296]: 2025-11-28 18:23:23.744 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:26 compute-0 nova_compute[189296]: 2025-11-28 18:23:26.001 189300 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764354190.998574, 6b358f92-75c9-4c1b-8a5c-733f8ded1782 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:23:26 compute-0 nova_compute[189296]: 2025-11-28 18:23:26.002 189300 INFO nova.compute.manager [-] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] VM Stopped (Lifecycle Event)#033[00m
Nov 28 18:23:26 compute-0 nova_compute[189296]: 2025-11-28 18:23:26.030 189300 DEBUG nova.compute.manager [None req-d83b56ad-ba2f-4512-ba8f-49b030c2facd - - - - - -] [instance: 6b358f92-75c9-4c1b-8a5c-733f8ded1782] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:23:26 compute-0 nova_compute[189296]: 2025-11-28 18:23:26.040 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:26 compute-0 nova_compute[189296]: 2025-11-28 18:23:26.402 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Updating instance_info_cache with network_info: [{"id": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "address": "fa:16:3e:c6:fd:79", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49c3cd00-3b", "ovs_interfaceid": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:23:26 compute-0 nova_compute[189296]: 2025-11-28 18:23:26.426 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:23:26 compute-0 nova_compute[189296]: 2025-11-28 18:23:26.427 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:23:26 compute-0 nova_compute[189296]: 2025-11-28 18:23:26.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:23:26 compute-0 nova_compute[189296]: 2025-11-28 18:23:26.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 28 18:23:27 compute-0 podman[252218]: 2025-11-28 18:23:27.034251705 +0000 UTC m=+0.079899831 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, version=9.4, name=ubi9, release-0.7.12=, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1214.1726694543, io.openshift.tags=base rhel9, vcs-type=git, io.k8s.description=The 
Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc.)
Nov 28 18:23:27 compute-0 podman[252216]: 2025-11-28 18:23:27.04171451 +0000 UTC m=+0.100667514 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 28 18:23:27 compute-0 podman[252224]: 2025-11-28 18:23:27.047506432 +0000 UTC m=+0.081626074 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, 
tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.license=GPLv2)
Nov 28 18:23:27 compute-0 podman[252217]: 2025-11-28 18:23:27.049336797 +0000 UTC m=+0.101882534 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 28 18:23:27 compute-0 nova_compute[189296]: 2025-11-28 18:23:27.640 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:23:27 compute-0 nova_compute[189296]: 2025-11-28 18:23:27.681 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:23:27 compute-0 nova_compute[189296]: 2025-11-28 18:23:27.682 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:23:27 compute-0 nova_compute[189296]: 2025-11-28 18:23:27.683 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:23:27 compute-0 nova_compute[189296]: 2025-11-28 18:23:27.684 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:23:27 compute-0 nova_compute[189296]: 2025-11-28 18:23:27.792 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:23:27 compute-0 nova_compute[189296]: 2025-11-28 18:23:27.897 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:23:27 compute-0 nova_compute[189296]: 2025-11-28 18:23:27.898 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:23:27 compute-0 nova_compute[189296]: 2025-11-28 18:23:27.967 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:23:28 compute-0 nova_compute[189296]: 2025-11-28 18:23:28.320 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:23:28 compute-0 nova_compute[189296]: 2025-11-28 18:23:28.322 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5139MB free_disk=72.27790832519531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:23:28 compute-0 nova_compute[189296]: 2025-11-28 18:23:28.322 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:23:28 compute-0 nova_compute[189296]: 2025-11-28 18:23:28.323 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:23:28 compute-0 nova_compute[189296]: 2025-11-28 18:23:28.521 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 200bd8bc-d121-4a86-b728-ea98aac95adf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:23:28 compute-0 nova_compute[189296]: 2025-11-28 18:23:28.522 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:23:28 compute-0 nova_compute[189296]: 2025-11-28 18:23:28.522 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:23:28 compute-0 nova_compute[189296]: 2025-11-28 18:23:28.614 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:23:28 compute-0 nova_compute[189296]: 2025-11-28 18:23:28.648 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:23:28 compute-0 nova_compute[189296]: 2025-11-28 18:23:28.672 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:23:28 compute-0 nova_compute[189296]: 2025-11-28 18:23:28.674 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.351s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:23:28 compute-0 nova_compute[189296]: 2025-11-28 18:23:28.675 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:23:28 compute-0 nova_compute[189296]: 2025-11-28 18:23:28.676 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 28 18:23:28 compute-0 nova_compute[189296]: 2025-11-28 18:23:28.690 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 28 18:23:28 compute-0 nova_compute[189296]: 2025-11-28 18:23:28.748 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:29 compute-0 nova_compute[189296]: 2025-11-28 18:23:29.676 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:23:29 compute-0 nova_compute[189296]: 2025-11-28 18:23:29.677 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:23:29 compute-0 podman[203494]: time="2025-11-28T18:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:23:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:23:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4789 "" "Go-http-client/1.1"
Nov 28 18:23:30 compute-0 podman[252304]: 2025-11-28 18:23:30.196149574 +0000 UTC m=+0.118638687 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:23:31 compute-0 nova_compute[189296]: 2025-11-28 18:23:31.044 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:31 compute-0 openstack_network_exporter[205632]: ERROR   18:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:23:31 compute-0 openstack_network_exporter[205632]: ERROR   18:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:23:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:23:31 compute-0 openstack_network_exporter[205632]: ERROR   18:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:23:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:23:31 compute-0 openstack_network_exporter[205632]: ERROR   18:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:23:31 compute-0 openstack_network_exporter[205632]: ERROR   18:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:23:32 compute-0 nova_compute[189296]: 2025-11-28 18:23:32.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:23:33 compute-0 nova_compute[189296]: 2025-11-28 18:23:33.750 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:36 compute-0 nova_compute[189296]: 2025-11-28 18:23:36.049 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:38 compute-0 podman[252328]: 2025-11-28 18:23:38.011868824 +0000 UTC m=+0.078282502 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:23:38 compute-0 nova_compute[189296]: 2025-11-28 18:23:38.750 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:41 compute-0 nova_compute[189296]: 2025-11-28 18:23:41.054 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:42 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:42.933 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:8b:d3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:a2:f8:d3:3f:9a'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:23:42 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:42.934 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 28 18:23:42 compute-0 nova_compute[189296]: 2025-11-28 18:23:42.936 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:43 compute-0 nova_compute[189296]: 2025-11-28 18:23:43.756 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:46 compute-0 nova_compute[189296]: 2025-11-28 18:23:46.058 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:48 compute-0 nova_compute[189296]: 2025-11-28 18:23:48.757 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:48 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:48.936 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d60b742f-7e94-4137-b50a-cfc8eac54167, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:23:51 compute-0 podman[252361]: 2025-11-28 18:23:51.013653142 +0000 UTC m=+0.079196774 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.buildah.version=1.33.7, release=1755695350, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat 
Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=edpm, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter)
Nov 28 18:23:51 compute-0 podman[252363]: 2025-11-28 18:23:51.019809874 +0000 UTC m=+0.071613697 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:23:51 compute-0 podman[252362]: 2025-11-28 18:23:51.025785231 +0000 UTC m=+0.087185431 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 28 18:23:51 compute-0 nova_compute[189296]: 2025-11-28 18:23:51.061 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:52.635 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:23:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:52.637 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:23:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:23:52.638 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:23:53 compute-0 nova_compute[189296]: 2025-11-28 18:23:53.760 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:56 compute-0 nova_compute[189296]: 2025-11-28 18:23:56.064 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:58 compute-0 podman[252415]: 2025-11-28 18:23:58.017343497 +0000 UTC m=+0.074740457 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 18:23:58 compute-0 podman[252416]: 2025-11-28 18:23:58.048786924 +0000 UTC m=+0.089292480 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 28 18:23:58 compute-0 podman[252417]: 2025-11-28 18:23:58.096356155 +0000 UTC m=+0.136262907 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., container_name=kepler, io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=base rhel9, release=1214.1726694543, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 28 18:23:58 compute-0 podman[252423]: 2025-11-28 18:23:58.103930881 +0000 UTC m=+0.134478164 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 28 18:23:58 compute-0 nova_compute[189296]: 2025-11-28 18:23:58.762 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:23:59 compute-0 podman[203494]: time="2025-11-28T18:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:23:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:23:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4794 "" "Go-http-client/1.1"
Nov 28 18:24:01 compute-0 nova_compute[189296]: 2025-11-28 18:24:01.066 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:24:01 compute-0 podman[252494]: 2025-11-28 18:24:01.067780256 +0000 UTC m=+0.119212632 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:24:01 compute-0 openstack_network_exporter[205632]: ERROR   18:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:24:01 compute-0 openstack_network_exporter[205632]: ERROR   18:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:24:01 compute-0 openstack_network_exporter[205632]: ERROR   18:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:24:01 compute-0 openstack_network_exporter[205632]: ERROR   18:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:24:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:24:01 compute-0 openstack_network_exporter[205632]: ERROR   18:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:24:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:24:03 compute-0 nova_compute[189296]: 2025-11-28 18:24:03.764 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:24:06 compute-0 nova_compute[189296]: 2025-11-28 18:24:06.071 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:24:06 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 28 18:24:08 compute-0 nova_compute[189296]: 2025-11-28 18:24:08.766 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:24:09 compute-0 podman[252520]: 2025-11-28 18:24:09.006358668 +0000 UTC m=+0.064368823 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:24:11 compute-0 nova_compute[189296]: 2025-11-28 18:24:11.075 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:24:13 compute-0 nova_compute[189296]: 2025-11-28 18:24:13.769 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:24:16 compute-0 nova_compute[189296]: 2025-11-28 18:24:16.080 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:24:17 compute-0 nova_compute[189296]: 2025-11-28 18:24:17.620 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:24:18 compute-0 nova_compute[189296]: 2025-11-28 18:24:18.775 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:24:20 compute-0 nova_compute[189296]: 2025-11-28 18:24:20.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:24:21 compute-0 nova_compute[189296]: 2025-11-28 18:24:21.086 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:24:21 compute-0 nova_compute[189296]: 2025-11-28 18:24:21.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:24:21 compute-0 nova_compute[189296]: 2025-11-28 18:24:21.627 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:24:22 compute-0 podman[252546]: 2025-11-28 18:24:22.039909292 +0000 UTC m=+0.092533590 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2)
Nov 28 18:24:22 compute-0 podman[252545]: 2025-11-28 18:24:22.042970476 +0000 UTC m=+0.093120483 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The 
Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=)
Nov 28 18:24:22 compute-0 podman[252547]: 2025-11-28 18:24:22.077650143 +0000 UTC m=+0.109165536 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:24:22 compute-0 nova_compute[189296]: 2025-11-28 18:24:22.628 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:24:22 compute-0 nova_compute[189296]: 2025-11-28 18:24:22.628 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:24:22 compute-0 nova_compute[189296]: 2025-11-28 18:24:22.628 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 18:24:22 compute-0 nova_compute[189296]: 2025-11-28 18:24:22.967 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:24:22 compute-0 nova_compute[189296]: 2025-11-28 18:24:22.967 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:24:22 compute-0 nova_compute[189296]: 2025-11-28 18:24:22.968 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:24:22 compute-0 nova_compute[189296]: 2025-11-28 18:24:22.968 189300 DEBUG nova.objects.instance [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 200bd8bc-d121-4a86-b728-ea98aac95adf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:24:23 compute-0 nova_compute[189296]: 2025-11-28 18:24:23.774 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:24:25 compute-0 nova_compute[189296]: 2025-11-28 18:24:25.184 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Updating instance_info_cache with network_info: [{"id": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "address": "fa:16:3e:c6:fd:79", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49c3cd00-3b", "ovs_interfaceid": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:24:25 compute-0 nova_compute[189296]: 2025-11-28 18:24:25.208 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:24:25 compute-0 nova_compute[189296]: 2025-11-28 18:24:25.209 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:24:25 compute-0 nova_compute[189296]: 2025-11-28 18:24:25.210 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:24:26 compute-0 nova_compute[189296]: 2025-11-28 18:24:26.089 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:24:28 compute-0 nova_compute[189296]: 2025-11-28 18:24:28.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:24:28 compute-0 nova_compute[189296]: 2025-11-28 18:24:28.776 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:24:28 compute-0 nova_compute[189296]: 2025-11-28 18:24:28.889 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:24:28 compute-0 nova_compute[189296]: 2025-11-28 18:24:28.890 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:24:28 compute-0 nova_compute[189296]: 2025-11-28 18:24:28.891 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:24:28 compute-0 nova_compute[189296]: 2025-11-28 18:24:28.891 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:24:29 compute-0 nova_compute[189296]: 2025-11-28 18:24:29.019 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:24:29 compute-0 podman[252599]: 2025-11-28 18:24:29.020594739 +0000 UTC m=+0.080831134 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:24:29 compute-0 podman[252600]: 2025-11-28 18:24:29.044698078 +0000 UTC m=+0.098270180 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:24:29 compute-0 podman[252601]: 2025-11-28 18:24:29.07019879 +0000 UTC m=+0.117040068 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, architecture=x86_64, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, build-date=2024-09-18T21:23:30, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 28 18:24:29 compute-0 podman[252602]: 2025-11-28 18:24:29.082342367 +0000 UTC m=+0.125147656 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:24:29 compute-0 nova_compute[189296]: 2025-11-28 18:24:29.097 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:24:29 compute-0 nova_compute[189296]: 2025-11-28 18:24:29.098 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:24:29 compute-0 nova_compute[189296]: 2025-11-28 18:24:29.158 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:24:29 compute-0 nova_compute[189296]: 2025-11-28 18:24:29.488 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:24:29 compute-0 nova_compute[189296]: 2025-11-28 18:24:29.489 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5116MB free_disk=72.27792739868164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:24:29 compute-0 nova_compute[189296]: 2025-11-28 18:24:29.490 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:24:29 compute-0 nova_compute[189296]: 2025-11-28 18:24:29.490 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:24:29 compute-0 nova_compute[189296]: 2025-11-28 18:24:29.573 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 200bd8bc-d121-4a86-b728-ea98aac95adf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:24:29 compute-0 nova_compute[189296]: 2025-11-28 18:24:29.574 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:24:29 compute-0 nova_compute[189296]: 2025-11-28 18:24:29.574 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:24:29 compute-0 podman[203494]: time="2025-11-28T18:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:24:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:24:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4793 "" "Go-http-client/1.1"
Nov 28 18:24:29 compute-0 nova_compute[189296]: 2025-11-28 18:24:29.792 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:24:29 compute-0 nova_compute[189296]: 2025-11-28 18:24:29.806 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:24:29 compute-0 nova_compute[189296]: 2025-11-28 18:24:29.809 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:24:29 compute-0 nova_compute[189296]: 2025-11-28 18:24:29.809 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.319s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:24:30 compute-0 ovn_controller[97771]: 2025-11-28T18:24:30Z|00178|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Nov 28 18:24:31 compute-0 nova_compute[189296]: 2025-11-28 18:24:31.093 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:24:31 compute-0 openstack_network_exporter[205632]: ERROR   18:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:24:31 compute-0 openstack_network_exporter[205632]: ERROR   18:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:24:31 compute-0 openstack_network_exporter[205632]: ERROR   18:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:24:31 compute-0 openstack_network_exporter[205632]: ERROR   18:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:24:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:24:31 compute-0 openstack_network_exporter[205632]: ERROR   18:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:24:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:24:31 compute-0 nova_compute[189296]: 2025-11-28 18:24:31.810 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:24:31 compute-0 nova_compute[189296]: 2025-11-28 18:24:31.811 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:24:32 compute-0 podman[252683]: 2025-11-28 18:24:32.035834079 +0000 UTC m=+0.099442739 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 28 18:24:32 compute-0 nova_compute[189296]: 2025-11-28 18:24:32.628 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:24:33 compute-0 nova_compute[189296]: 2025-11-28 18:24:33.620 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:24:33 compute-0 nova_compute[189296]: 2025-11-28 18:24:33.782 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:24:36 compute-0 nova_compute[189296]: 2025-11-28 18:24:36.096 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:24:38 compute-0 nova_compute[189296]: 2025-11-28 18:24:38.781 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:24:40 compute-0 podman[252708]: 2025-11-28 18:24:40.018531699 +0000 UTC m=+0.074984722 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:24:41 compute-0 nova_compute[189296]: 2025-11-28 18:24:41.100 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:24:43 compute-0 nova_compute[189296]: 2025-11-28 18:24:43.784 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:24:46 compute-0 nova_compute[189296]: 2025-11-28 18:24:46.106 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:24:48 compute-0 nova_compute[189296]: 2025-11-28 18:24:48.790 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:24:51 compute-0 nova_compute[189296]: 2025-11-28 18:24:51.109 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.987 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.988 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.988 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.989 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc143395760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.990 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.990 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.991 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.991 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.991 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.991 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.991 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.992 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.992 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.992 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.993 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.993 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.993 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.994 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.994 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.995 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.995 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.995 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.996 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.996 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.996 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.997 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.997 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.997 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:24:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.997 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.999 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '200bd8bc-d121-4a86-b728-ea98aac95adf', 'name': 'te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw', 'flavor': {'id': 'b177f611-8f79-4bfd-9a12-e83e9545757b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '7d5268e2-45b5-44b2-b3c1-3da9b27b258e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4c71a276f38f4bfebf1d3631d6f82966', 'user_id': 'c1f6c07dc6c5400cbf4fa724992b16d3', 'hostId': 'd63a60f107fb9172c58f42464c0d0697d316dd72980345b387d4da6d', 'status': 'active', 'metadata': {'metering.server_group': 'a12ef97f-9351-448f-95c7-ab90e2c7b098'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:51.999 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.000 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.000 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.000 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.002 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-28T18:24:52.000762) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.026 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.027 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.028 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.028 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc1433970b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.028 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.028 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.029 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.029 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.030 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-28T18:24:52.029549) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.085 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.bytes volume: 29338624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.085 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.086 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.086 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc1433971d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.087 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.087 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.087 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.087 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.087 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.latency volume: 562549638 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.088 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.latency volume: 45170226 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.089 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.089 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc143397c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.089 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.089 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.090 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.090 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-28T18:24:52.087543) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.090 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.090 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-28T18:24:52.090225) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.094 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.095 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.095 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc143397620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.096 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.096 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.096 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.096 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.096 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-28T18:24:52.096660) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.119 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/memory.usage volume: 43.58984375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.120 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.120 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc143397260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.121 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.121 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.121 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.121 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.121 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.122 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.121 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-28T18:24:52.121481) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.122 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.122 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc143397290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.122 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.122 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.122 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.122 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.123 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.bytes volume: 72884224 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.123 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.123 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.123 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc143408290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.124 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.124 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.124 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.124 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.124 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.124 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.124 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc1433972f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.125 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.125 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.125 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.125 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.125 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.latency volume: 2362907010 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.125 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.126 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.126 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc144640f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.126 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.126 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.126 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.126 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.126 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.requests volume: 312 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.127 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.127 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.127 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc1433976b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.127 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.128 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.128 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.128 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.128 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.128 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.128 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc143397fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.128 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.129 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.129 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.129 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.129 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.129 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.129 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-28T18:24:52.122939) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.130 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc14457db80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.130 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-28T18:24:52.124288) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.130 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.130 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.130 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-28T18:24:52.125437) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.130 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.130 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-28T18:24:52.126832) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.130 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.130 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-28T18:24:52.128200) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.130 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/cpu volume: 164130000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.130 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-28T18:24:52.129209) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.130 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.130 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc143397950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.130 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-28T18:24:52.130288) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.130 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.131 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc143397380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.131 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.131 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.131 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.131 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.131 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.131 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc143397bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.131 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.131 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.132 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.132 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.132 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.132 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.132 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc1433973e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.132 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.132 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.132 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.132 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.133 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.133 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc143397c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.133 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.133 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.133 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.133 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.133 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.133 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.134 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc143397ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.134 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.134 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.134 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.134 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.134 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.134 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.134 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc1460ad370>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.134 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.134 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.134 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.135 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.135 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.135 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.135 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.135 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc143397d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.135 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.135 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-28T18:24:52.131440) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.136 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.136 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.136 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.136 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.136 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.136 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc143397e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.136 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.136 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc143397650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.136 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.136 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.136 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.137 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.137 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.137 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.137 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc143397e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.137 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.137 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.137 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.137 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.137 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.138 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.138 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc143397f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.138 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.138 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.138 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.138 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.138 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.138 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.138 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc143397230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.139 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.139 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.139 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.139 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-28T18:24:52.132054) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.139 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.139 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.requests volume: 1056 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.139 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.139 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.140 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.140 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.140 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.140 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-28T18:24:52.132915) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.140 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.140 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.140 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-28T18:24:52.133582) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.140 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.141 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.141 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.141 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.141 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.141 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.141 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.141 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.141 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.141 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.141 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.141 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.141 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.141 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.141 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.141 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.142 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.142 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-28T18:24:52.134323) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.142 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-28T18:24:52.135031) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.142 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.142 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-28T18:24:52.136155) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.142 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.142 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-28T18:24:52.137032) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.142 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.142 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.142 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-28T18:24:52.137750) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.142 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-28T18:24:52.138472) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:24:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:24:52.142 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-28T18:24:52.139309) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:24:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:24:52.637 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:24:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:24:52.637 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:24:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:24:52.638 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:24:53 compute-0 podman[252734]: 2025-11-28 18:24:53.023209278 +0000 UTC m=+0.072674385 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=f26160204c78771e78cdd2489258319b)
Nov 28 18:24:53 compute-0 podman[252737]: 2025-11-28 18:24:53.034823621 +0000 UTC m=+0.081053170 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 28 18:24:53 compute-0 podman[252733]: 2025-11-28 18:24:53.049793657 +0000 UTC m=+0.109050973 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, container_name=openstack_network_exporter, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, vcs-type=git, maintainer=Red Hat, Inc., distribution-scope=public)
Nov 28 18:24:53 compute-0 nova_compute[189296]: 2025-11-28 18:24:53.791 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:24:56 compute-0 nova_compute[189296]: 2025-11-28 18:24:56.112 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:24:58 compute-0 nova_compute[189296]: 2025-11-28 18:24:58.801 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:24:59 compute-0 podman[203494]: time="2025-11-28T18:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:24:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:24:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4791 "" "Go-http-client/1.1"
Nov 28 18:25:00 compute-0 podman[252792]: 2025-11-28 18:25:00.018531572 +0000 UTC m=+0.076353205 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 28 18:25:00 compute-0 podman[252793]: 2025-11-28 18:25:00.020290675 +0000 UTC m=+0.072457100 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 28 18:25:00 compute-0 podman[252799]: 2025-11-28 18:25:00.042876316 +0000 UTC m=+0.087097947 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:25:00 compute-0 podman[252794]: 2025-11-28 18:25:00.085299673 +0000 UTC m=+0.131747398 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., config_id=edpm, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, release=1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.tags=base rhel9, io.openshift.expose-services=, version=9.4, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 28 18:25:01 compute-0 nova_compute[189296]: 2025-11-28 18:25:01.116 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:25:01 compute-0 openstack_network_exporter[205632]: ERROR   18:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:25:01 compute-0 openstack_network_exporter[205632]: ERROR   18:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:25:01 compute-0 openstack_network_exporter[205632]: ERROR   18:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:25:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:25:01 compute-0 openstack_network_exporter[205632]: ERROR   18:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:25:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:25:01 compute-0 openstack_network_exporter[205632]: ERROR   18:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:25:03 compute-0 podman[252867]: 2025-11-28 18:25:03.062522813 +0000 UTC m=+0.119690223 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:25:03 compute-0 nova_compute[189296]: 2025-11-28 18:25:03.805 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:25:06 compute-0 nova_compute[189296]: 2025-11-28 18:25:06.119 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:25:08 compute-0 nova_compute[189296]: 2025-11-28 18:25:08.810 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:25:11 compute-0 podman[252891]: 2025-11-28 18:25:11.016828538 +0000 UTC m=+0.067804205 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:25:11 compute-0 nova_compute[189296]: 2025-11-28 18:25:11.124 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:25:13 compute-0 nova_compute[189296]: 2025-11-28 18:25:13.812 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:25:16 compute-0 nova_compute[189296]: 2025-11-28 18:25:16.128 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:25:17 compute-0 nova_compute[189296]: 2025-11-28 18:25:17.650 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:25:18 compute-0 nova_compute[189296]: 2025-11-28 18:25:18.817 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:25:21 compute-0 nova_compute[189296]: 2025-11-28 18:25:21.132 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:25:21 compute-0 nova_compute[189296]: 2025-11-28 18:25:21.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:25:21 compute-0 nova_compute[189296]: 2025-11-28 18:25:21.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:25:21 compute-0 nova_compute[189296]: 2025-11-28 18:25:21.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:25:23 compute-0 nova_compute[189296]: 2025-11-28 18:25:23.628 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:25:23 compute-0 nova_compute[189296]: 2025-11-28 18:25:23.820 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:25:24 compute-0 podman[252916]: 2025-11-28 18:25:24.019869548 +0000 UTC m=+0.082795412 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, name=ubi9-minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6)
Nov 28 18:25:24 compute-0 podman[252917]: 2025-11-28 18:25:24.023031565 +0000 UTC m=+0.080877185 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, 
org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 28 18:25:24 compute-0 podman[252918]: 2025-11-28 18:25:24.047081912 +0000 UTC m=+0.101130340 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 28 18:25:24 compute-0 nova_compute[189296]: 2025-11-28 18:25:24.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:25:24 compute-0 nova_compute[189296]: 2025-11-28 18:25:24.627 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:25:24 compute-0 nova_compute[189296]: 2025-11-28 18:25:24.627 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 18:25:25 compute-0 nova_compute[189296]: 2025-11-28 18:25:25.291 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:25:25 compute-0 nova_compute[189296]: 2025-11-28 18:25:25.292 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:25:25 compute-0 nova_compute[189296]: 2025-11-28 18:25:25.293 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:25:25 compute-0 nova_compute[189296]: 2025-11-28 18:25:25.294 189300 DEBUG nova.objects.instance [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 200bd8bc-d121-4a86-b728-ea98aac95adf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:25:26 compute-0 nova_compute[189296]: 2025-11-28 18:25:26.136 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:25:27 compute-0 nova_compute[189296]: 2025-11-28 18:25:27.301 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Updating instance_info_cache with network_info: [{"id": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "address": "fa:16:3e:c6:fd:79", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49c3cd00-3b", "ovs_interfaceid": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:25:27 compute-0 nova_compute[189296]: 2025-11-28 18:25:27.346 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:25:27 compute-0 nova_compute[189296]: 2025-11-28 18:25:27.347 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:25:28 compute-0 nova_compute[189296]: 2025-11-28 18:25:28.822 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:25:29 compute-0 podman[203494]: time="2025-11-28T18:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:25:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:25:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4793 "" "Go-http-client/1.1"
Nov 28 18:25:30 compute-0 nova_compute[189296]: 2025-11-28 18:25:30.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:25:30 compute-0 nova_compute[189296]: 2025-11-28 18:25:30.662 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:25:30 compute-0 nova_compute[189296]: 2025-11-28 18:25:30.662 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:25:30 compute-0 nova_compute[189296]: 2025-11-28 18:25:30.662 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:25:30 compute-0 nova_compute[189296]: 2025-11-28 18:25:30.662 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:25:31 compute-0 podman[252973]: 2025-11-28 18:25:31.124216044 +0000 UTC m=+0.065252235 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 28 18:25:31 compute-0 nova_compute[189296]: 2025-11-28 18:25:31.141 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:25:31 compute-0 podman[252974]: 2025-11-28 18:25:31.146987129 +0000 UTC m=+0.077720118 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 28 18:25:31 compute-0 podman[252975]: 2025-11-28 18:25:31.148386604 +0000 UTC m=+0.076200981 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, release=1214.1726694543, version=9.4, com.redhat.component=ubi9-container, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Nov 28 18:25:31 compute-0 podman[252976]: 2025-11-28 18:25:31.171488288 +0000 UTC m=+0.098665030 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:25:31 compute-0 nova_compute[189296]: 2025-11-28 18:25:31.185 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:25:31 compute-0 nova_compute[189296]: 2025-11-28 18:25:31.245 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:25:31 compute-0 nova_compute[189296]: 2025-11-28 18:25:31.246 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:25:31 compute-0 nova_compute[189296]: 2025-11-28 18:25:31.304 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:25:31 compute-0 openstack_network_exporter[205632]: ERROR   18:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:25:31 compute-0 openstack_network_exporter[205632]: ERROR   18:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:25:31 compute-0 openstack_network_exporter[205632]: ERROR   18:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:25:31 compute-0 openstack_network_exporter[205632]: ERROR   18:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:25:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:25:31 compute-0 openstack_network_exporter[205632]: ERROR   18:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:25:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:25:31 compute-0 nova_compute[189296]: 2025-11-28 18:25:31.658 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:25:31 compute-0 nova_compute[189296]: 2025-11-28 18:25:31.660 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5127MB free_disk=72.27790832519531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:25:31 compute-0 nova_compute[189296]: 2025-11-28 18:25:31.660 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:25:31 compute-0 nova_compute[189296]: 2025-11-28 18:25:31.661 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:25:31 compute-0 nova_compute[189296]: 2025-11-28 18:25:31.933 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 200bd8bc-d121-4a86-b728-ea98aac95adf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:25:31 compute-0 nova_compute[189296]: 2025-11-28 18:25:31.933 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:25:31 compute-0 nova_compute[189296]: 2025-11-28 18:25:31.933 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:25:31 compute-0 nova_compute[189296]: 2025-11-28 18:25:31.978 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:25:31 compute-0 nova_compute[189296]: 2025-11-28 18:25:31.998 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:25:32 compute-0 nova_compute[189296]: 2025-11-28 18:25:32.000 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:25:32 compute-0 nova_compute[189296]: 2025-11-28 18:25:32.001 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:25:33 compute-0 nova_compute[189296]: 2025-11-28 18:25:33.002 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:25:33 compute-0 nova_compute[189296]: 2025-11-28 18:25:33.004 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:25:33 compute-0 nova_compute[189296]: 2025-11-28 18:25:33.829 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:25:34 compute-0 podman[253053]: 2025-11-28 18:25:34.126649321 +0000 UTC m=+0.177856593 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 28 18:25:34 compute-0 nova_compute[189296]: 2025-11-28 18:25:34.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:25:36 compute-0 nova_compute[189296]: 2025-11-28 18:25:36.145 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:25:38 compute-0 nova_compute[189296]: 2025-11-28 18:25:38.831 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:25:41 compute-0 nova_compute[189296]: 2025-11-28 18:25:41.149 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:25:42 compute-0 podman[253078]: 2025-11-28 18:25:42.026481338 +0000 UTC m=+0.092001627 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 28 18:25:43 compute-0 nova_compute[189296]: 2025-11-28 18:25:43.833 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:25:46 compute-0 nova_compute[189296]: 2025-11-28 18:25:46.153 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:25:48 compute-0 nova_compute[189296]: 2025-11-28 18:25:48.835 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:25:51 compute-0 nova_compute[189296]: 2025-11-28 18:25:51.157 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:25:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:25:52.638 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:25:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:25:52.638 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:25:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:25:52.639 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:25:53 compute-0 nova_compute[189296]: 2025-11-28 18:25:53.837 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:25:55 compute-0 podman[253104]: 2025-11-28 18:25:55.022939045 +0000 UTC m=+0.083997812 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 28 18:25:55 compute-0 podman[253105]: 2025-11-28 18:25:55.023518889 +0000 UTC m=+0.081445399 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 28 18:25:55 compute-0 podman[253103]: 2025-11-28 18:25:55.035039251 +0000 UTC m=+0.094531160 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, architecture=x86_64, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible)
Nov 28 18:25:56 compute-0 nova_compute[189296]: 2025-11-28 18:25:56.161 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:25:58 compute-0 nova_compute[189296]: 2025-11-28 18:25:58.840 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:25:59 compute-0 podman[203494]: time="2025-11-28T18:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:25:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:25:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4797 "" "Go-http-client/1.1"
Nov 28 18:26:01 compute-0 nova_compute[189296]: 2025-11-28 18:26:01.166 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:01 compute-0 openstack_network_exporter[205632]: ERROR   18:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:26:01 compute-0 openstack_network_exporter[205632]: ERROR   18:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:26:01 compute-0 openstack_network_exporter[205632]: ERROR   18:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:26:01 compute-0 openstack_network_exporter[205632]: ERROR   18:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:26:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:26:01 compute-0 openstack_network_exporter[205632]: ERROR   18:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:26:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:26:01 compute-0 podman[253159]: 2025-11-28 18:26:01.995135275 +0000 UTC m=+0.059157176 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 28 18:26:02 compute-0 podman[253168]: 2025-11-28 18:26:02.024209334 +0000 UTC m=+0.072332826 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 28 18:26:02 compute-0 podman[253161]: 2025-11-28 18:26:02.02484614 +0000 UTC m=+0.078133058 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_id=edpm, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., distribution-scope=public, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, architecture=x86_64, managed_by=edpm_ansible, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9)
Nov 28 18:26:02 compute-0 podman[253160]: 2025-11-28 18:26:02.02933714 +0000 UTC m=+0.083155201 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 28 18:26:03 compute-0 nova_compute[189296]: 2025-11-28 18:26:03.842 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:05 compute-0 podman[253236]: 2025-11-28 18:26:05.109804372 +0000 UTC m=+0.153844747 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 28 18:26:06 compute-0 nova_compute[189296]: 2025-11-28 18:26:06.169 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:08 compute-0 nova_compute[189296]: 2025-11-28 18:26:08.845 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:10 compute-0 nova_compute[189296]: 2025-11-28 18:26:10.230 189300 DEBUG oslo_concurrency.lockutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Acquiring lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:26:10 compute-0 nova_compute[189296]: 2025-11-28 18:26:10.231 189300 DEBUG oslo_concurrency.lockutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:26:10 compute-0 nova_compute[189296]: 2025-11-28 18:26:10.295 189300 DEBUG nova.compute.manager [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 28 18:26:10 compute-0 nova_compute[189296]: 2025-11-28 18:26:10.397 189300 DEBUG oslo_concurrency.lockutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:26:10 compute-0 nova_compute[189296]: 2025-11-28 18:26:10.397 189300 DEBUG oslo_concurrency.lockutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:26:10 compute-0 nova_compute[189296]: 2025-11-28 18:26:10.411 189300 DEBUG nova.virt.hardware [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 28 18:26:10 compute-0 nova_compute[189296]: 2025-11-28 18:26:10.412 189300 INFO nova.compute.claims [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 28 18:26:10 compute-0 nova_compute[189296]: 2025-11-28 18:26:10.683 189300 DEBUG nova.compute.provider_tree [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:26:10 compute-0 nova_compute[189296]: 2025-11-28 18:26:10.696 189300 DEBUG nova.scheduler.client.report [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:26:10 compute-0 nova_compute[189296]: 2025-11-28 18:26:10.731 189300 DEBUG oslo_concurrency.lockutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:26:10 compute-0 nova_compute[189296]: 2025-11-28 18:26:10.732 189300 DEBUG nova.compute.manager [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 28 18:26:10 compute-0 nova_compute[189296]: 2025-11-28 18:26:10.787 189300 DEBUG nova.compute.manager [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 28 18:26:10 compute-0 nova_compute[189296]: 2025-11-28 18:26:10.788 189300 DEBUG nova.network.neutron [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 28 18:26:10 compute-0 nova_compute[189296]: 2025-11-28 18:26:10.806 189300 INFO nova.virt.libvirt.driver [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 28 18:26:10 compute-0 nova_compute[189296]: 2025-11-28 18:26:10.824 189300 DEBUG nova.compute.manager [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 28 18:26:10 compute-0 nova_compute[189296]: 2025-11-28 18:26:10.905 189300 DEBUG nova.compute.manager [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 28 18:26:10 compute-0 nova_compute[189296]: 2025-11-28 18:26:10.907 189300 DEBUG nova.virt.libvirt.driver [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 28 18:26:10 compute-0 nova_compute[189296]: 2025-11-28 18:26:10.908 189300 INFO nova.virt.libvirt.driver [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Creating image(s)#033[00m
Nov 28 18:26:10 compute-0 nova_compute[189296]: 2025-11-28 18:26:10.909 189300 DEBUG oslo_concurrency.lockutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Acquiring lock "/var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:26:10 compute-0 nova_compute[189296]: 2025-11-28 18:26:10.910 189300 DEBUG oslo_concurrency.lockutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "/var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:26:10 compute-0 nova_compute[189296]: 2025-11-28 18:26:10.911 189300 DEBUG oslo_concurrency.lockutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "/var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:26:10 compute-0 nova_compute[189296]: 2025-11-28 18:26:10.928 189300 DEBUG oslo_concurrency.processutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:26:11 compute-0 nova_compute[189296]: 2025-11-28 18:26:11.025 189300 DEBUG oslo_concurrency.processutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:26:11 compute-0 nova_compute[189296]: 2025-11-28 18:26:11.027 189300 DEBUG oslo_concurrency.lockutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Acquiring lock "ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:26:11 compute-0 nova_compute[189296]: 2025-11-28 18:26:11.028 189300 DEBUG oslo_concurrency.lockutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:26:11 compute-0 nova_compute[189296]: 2025-11-28 18:26:11.044 189300 DEBUG oslo_concurrency.processutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:26:11 compute-0 nova_compute[189296]: 2025-11-28 18:26:11.101 189300 DEBUG oslo_concurrency.processutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:26:11 compute-0 nova_compute[189296]: 2025-11-28 18:26:11.103 189300 DEBUG oslo_concurrency.processutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa,backing_fmt=raw /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:26:11 compute-0 nova_compute[189296]: 2025-11-28 18:26:11.155 189300 DEBUG oslo_concurrency.processutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa,backing_fmt=raw /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk 1073741824" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:26:11 compute-0 nova_compute[189296]: 2025-11-28 18:26:11.156 189300 DEBUG oslo_concurrency.lockutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:26:11 compute-0 nova_compute[189296]: 2025-11-28 18:26:11.157 189300 DEBUG oslo_concurrency.processutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:26:11 compute-0 nova_compute[189296]: 2025-11-28 18:26:11.173 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:11 compute-0 nova_compute[189296]: 2025-11-28 18:26:11.218 189300 DEBUG oslo_concurrency.processutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:26:11 compute-0 nova_compute[189296]: 2025-11-28 18:26:11.219 189300 DEBUG nova.virt.disk.api [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Checking if we can resize image /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 28 18:26:11 compute-0 nova_compute[189296]: 2025-11-28 18:26:11.220 189300 DEBUG oslo_concurrency.processutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:26:11 compute-0 nova_compute[189296]: 2025-11-28 18:26:11.296 189300 DEBUG oslo_concurrency.processutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:26:11 compute-0 nova_compute[189296]: 2025-11-28 18:26:11.297 189300 DEBUG nova.virt.disk.api [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Cannot resize image /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Nov 28 18:26:11 compute-0 nova_compute[189296]: 2025-11-28 18:26:11.298 189300 DEBUG nova.objects.instance [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lazy-loading 'migration_context' on Instance uuid bf6c3ac0-6e00-4be5-ae3a-454d022268e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:26:11 compute-0 nova_compute[189296]: 2025-11-28 18:26:11.312 189300 DEBUG nova.virt.libvirt.driver [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 28 18:26:11 compute-0 nova_compute[189296]: 2025-11-28 18:26:11.313 189300 DEBUG nova.virt.libvirt.driver [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Ensure instance console log exists: /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 28 18:26:11 compute-0 nova_compute[189296]: 2025-11-28 18:26:11.313 189300 DEBUG oslo_concurrency.lockutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:26:11 compute-0 nova_compute[189296]: 2025-11-28 18:26:11.314 189300 DEBUG oslo_concurrency.lockutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:26:11 compute-0 nova_compute[189296]: 2025-11-28 18:26:11.314 189300 DEBUG oslo_concurrency.lockutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:26:11 compute-0 nova_compute[189296]: 2025-11-28 18:26:11.411 189300 DEBUG nova.policy [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c1f6c07dc6c5400cbf4fa724992b16d3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4c71a276f38f4bfebf1d3631d6f82966', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 28 18:26:13 compute-0 podman[253276]: 2025-11-28 18:26:13.044579911 +0000 UTC m=+0.088570493 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 28 18:26:13 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:26:13.458 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:8b:d3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:a2:f8:d3:3f:9a'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:26:13 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:26:13.460 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 28 18:26:13 compute-0 nova_compute[189296]: 2025-11-28 18:26:13.460 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:13 compute-0 nova_compute[189296]: 2025-11-28 18:26:13.848 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:14 compute-0 nova_compute[189296]: 2025-11-28 18:26:14.005 189300 DEBUG nova.network.neutron [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Successfully created port: 0a072d7e-c128-48b9-9754-327584bc3579 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 28 18:26:15 compute-0 nova_compute[189296]: 2025-11-28 18:26:15.713 189300 DEBUG nova.network.neutron [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Successfully updated port: 0a072d7e-c128-48b9-9754-327584bc3579 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 28 18:26:15 compute-0 nova_compute[189296]: 2025-11-28 18:26:15.734 189300 DEBUG oslo_concurrency.lockutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Acquiring lock "refresh_cache-bf6c3ac0-6e00-4be5-ae3a-454d022268e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:26:15 compute-0 nova_compute[189296]: 2025-11-28 18:26:15.735 189300 DEBUG oslo_concurrency.lockutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Acquired lock "refresh_cache-bf6c3ac0-6e00-4be5-ae3a-454d022268e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:26:15 compute-0 nova_compute[189296]: 2025-11-28 18:26:15.735 189300 DEBUG nova.network.neutron [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 28 18:26:15 compute-0 nova_compute[189296]: 2025-11-28 18:26:15.826 189300 DEBUG nova.compute.manager [req-6e845e40-470a-4c19-9edb-e0b60a016742 req-92f9234a-b379-44a1-b283-822c9bfe7e1c 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Received event network-changed-0a072d7e-c128-48b9-9754-327584bc3579 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:26:15 compute-0 nova_compute[189296]: 2025-11-28 18:26:15.827 189300 DEBUG nova.compute.manager [req-6e845e40-470a-4c19-9edb-e0b60a016742 req-92f9234a-b379-44a1-b283-822c9bfe7e1c 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Refreshing instance network info cache due to event network-changed-0a072d7e-c128-48b9-9754-327584bc3579. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 28 18:26:15 compute-0 nova_compute[189296]: 2025-11-28 18:26:15.828 189300 DEBUG oslo_concurrency.lockutils [req-6e845e40-470a-4c19-9edb-e0b60a016742 req-92f9234a-b379-44a1-b283-822c9bfe7e1c 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "refresh_cache-bf6c3ac0-6e00-4be5-ae3a-454d022268e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:26:16 compute-0 nova_compute[189296]: 2025-11-28 18:26:16.179 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:16 compute-0 nova_compute[189296]: 2025-11-28 18:26:16.321 189300 DEBUG nova.network.neutron [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.635 189300 DEBUG nova.network.neutron [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Updating instance_info_cache with network_info: [{"id": "0a072d7e-c128-48b9-9754-327584bc3579", "address": "fa:16:3e:c4:e2:c9", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a072d7e-c1", "ovs_interfaceid": "0a072d7e-c128-48b9-9754-327584bc3579", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.658 189300 DEBUG oslo_concurrency.lockutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Releasing lock "refresh_cache-bf6c3ac0-6e00-4be5-ae3a-454d022268e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.659 189300 DEBUG nova.compute.manager [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Instance network_info: |[{"id": "0a072d7e-c128-48b9-9754-327584bc3579", "address": "fa:16:3e:c4:e2:c9", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a072d7e-c1", "ovs_interfaceid": "0a072d7e-c128-48b9-9754-327584bc3579", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.660 189300 DEBUG oslo_concurrency.lockutils [req-6e845e40-470a-4c19-9edb-e0b60a016742 req-92f9234a-b379-44a1-b283-822c9bfe7e1c 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquired lock "refresh_cache-bf6c3ac0-6e00-4be5-ae3a-454d022268e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.661 189300 DEBUG nova.network.neutron [req-6e845e40-470a-4c19-9edb-e0b60a016742 req-92f9234a-b379-44a1-b283-822c9bfe7e1c 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Refreshing network info cache for port 0a072d7e-c128-48b9-9754-327584bc3579 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.663 189300 DEBUG nova.virt.libvirt.driver [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Start _get_guest_xml network_info=[{"id": "0a072d7e-c128-48b9-9754-327584bc3579", "address": "fa:16:3e:c4:e2:c9", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a072d7e-c1", "ovs_interfaceid": "0a072d7e-c128-48b9-9754-327584bc3579", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-28T18:21:53Z,direct_url=<?>,disk_format='qcow2',id=7d5268e2-45b5-44b2-b3c1-3da9b27b258e,min_disk=0,min_ram=0,name='tempest-scenario-img--853594115',owner='4c71a276f38f4bfebf1d3631d6f82966',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-28T18:21:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'guest_format': None, 'size': 0, 'encryption_options': None, 'device_name': '/dev/vda', 'encrypted': False, 'boot_index': 0, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'image_id': '7d5268e2-45b5-44b2-b3c1-3da9b27b258e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.672 189300 WARNING nova.virt.libvirt.driver [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.679 189300 DEBUG nova.virt.libvirt.host [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.680 189300 DEBUG nova.virt.libvirt.host [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.690 189300 DEBUG nova.virt.libvirt.host [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.691 189300 DEBUG nova.virt.libvirt.host [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.692 189300 DEBUG nova.virt.libvirt.driver [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.692 189300 DEBUG nova.virt.hardware [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-28T18:16:37Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b177f611-8f79-4bfd-9a12-e83e9545757b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-28T18:21:53Z,direct_url=<?>,disk_format='qcow2',id=7d5268e2-45b5-44b2-b3c1-3da9b27b258e,min_disk=0,min_ram=0,name='tempest-scenario-img--853594115',owner='4c71a276f38f4bfebf1d3631d6f82966',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-28T18:21:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.693 189300 DEBUG nova.virt.hardware [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.693 189300 DEBUG nova.virt.hardware [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.694 189300 DEBUG nova.virt.hardware [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.694 189300 DEBUG nova.virt.hardware [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.695 189300 DEBUG nova.virt.hardware [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.695 189300 DEBUG nova.virt.hardware [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.695 189300 DEBUG nova.virt.hardware [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.696 189300 DEBUG nova.virt.hardware [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.696 189300 DEBUG nova.virt.hardware [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.697 189300 DEBUG nova.virt.hardware [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.700 189300 DEBUG nova.virt.libvirt.vif [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T18:26:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5',id=16,image_ref='7d5268e2-45b5-44b2-b3c1-3da9b27b258e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='a12ef97f-9351-448f-95c7-ab90e2c7b098'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c71a276f38f4bfebf1d3631d6f82966',ramdisk_id='',reservation_id='r-tkz6hxoq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7d5268e2-45b5-44b2-b3c1-3da9b27b258e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-320555444',owner_user_name='tempest-PrometheusGabbiTest-320555444-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:26:10Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='c1f6c07dc6c5400cbf4fa724992b16d3',uuid=bf6c3ac0-6e00-4be5-ae3a-454d022268e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0a072d7e-c128-48b9-9754-327584bc3579", "address": "fa:16:3e:c4:e2:c9", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a072d7e-c1", "ovs_interfaceid": "0a072d7e-c128-48b9-9754-327584bc3579", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.701 189300 DEBUG nova.network.os_vif_util [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Converting VIF {"id": "0a072d7e-c128-48b9-9754-327584bc3579", "address": "fa:16:3e:c4:e2:c9", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a072d7e-c1", "ovs_interfaceid": "0a072d7e-c128-48b9-9754-327584bc3579", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.702 189300 DEBUG nova.network.os_vif_util [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:e2:c9,bridge_name='br-int',has_traffic_filtering=True,id=0a072d7e-c128-48b9-9754-327584bc3579,network=Network(a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a072d7e-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.703 189300 DEBUG nova.objects.instance [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lazy-loading 'pci_devices' on Instance uuid bf6c3ac0-6e00-4be5-ae3a-454d022268e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.715 189300 DEBUG nova.virt.libvirt.driver [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] End _get_guest_xml xml=<domain type="kvm">
Nov 28 18:26:17 compute-0 nova_compute[189296]:  <uuid>bf6c3ac0-6e00-4be5-ae3a-454d022268e8</uuid>
Nov 28 18:26:17 compute-0 nova_compute[189296]:  <name>instance-00000010</name>
Nov 28 18:26:17 compute-0 nova_compute[189296]:  <memory>131072</memory>
Nov 28 18:26:17 compute-0 nova_compute[189296]:  <vcpu>1</vcpu>
Nov 28 18:26:17 compute-0 nova_compute[189296]:  <metadata>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <nova:name>te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5</nova:name>
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <nova:creationTime>2025-11-28 18:26:17</nova:creationTime>
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <nova:flavor name="m1.nano">
Nov 28 18:26:17 compute-0 nova_compute[189296]:        <nova:memory>128</nova:memory>
Nov 28 18:26:17 compute-0 nova_compute[189296]:        <nova:disk>1</nova:disk>
Nov 28 18:26:17 compute-0 nova_compute[189296]:        <nova:swap>0</nova:swap>
Nov 28 18:26:17 compute-0 nova_compute[189296]:        <nova:ephemeral>0</nova:ephemeral>
Nov 28 18:26:17 compute-0 nova_compute[189296]:        <nova:vcpus>1</nova:vcpus>
Nov 28 18:26:17 compute-0 nova_compute[189296]:      </nova:flavor>
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <nova:owner>
Nov 28 18:26:17 compute-0 nova_compute[189296]:        <nova:user uuid="c1f6c07dc6c5400cbf4fa724992b16d3">tempest-PrometheusGabbiTest-320555444-project-member</nova:user>
Nov 28 18:26:17 compute-0 nova_compute[189296]:        <nova:project uuid="4c71a276f38f4bfebf1d3631d6f82966">tempest-PrometheusGabbiTest-320555444</nova:project>
Nov 28 18:26:17 compute-0 nova_compute[189296]:      </nova:owner>
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <nova:root type="image" uuid="7d5268e2-45b5-44b2-b3c1-3da9b27b258e"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <nova:ports>
Nov 28 18:26:17 compute-0 nova_compute[189296]:        <nova:port uuid="0a072d7e-c128-48b9-9754-327584bc3579">
Nov 28 18:26:17 compute-0 nova_compute[189296]:          <nova:ip type="fixed" address="10.100.1.22" ipVersion="4"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:        </nova:port>
Nov 28 18:26:17 compute-0 nova_compute[189296]:      </nova:ports>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    </nova:instance>
Nov 28 18:26:17 compute-0 nova_compute[189296]:  </metadata>
Nov 28 18:26:17 compute-0 nova_compute[189296]:  <sysinfo type="smbios">
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <system>
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <entry name="manufacturer">RDO</entry>
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <entry name="product">OpenStack Compute</entry>
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <entry name="serial">bf6c3ac0-6e00-4be5-ae3a-454d022268e8</entry>
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <entry name="uuid">bf6c3ac0-6e00-4be5-ae3a-454d022268e8</entry>
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <entry name="family">Virtual Machine</entry>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    </system>
Nov 28 18:26:17 compute-0 nova_compute[189296]:  </sysinfo>
Nov 28 18:26:17 compute-0 nova_compute[189296]:  <os>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <boot dev="hd"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <smbios mode="sysinfo"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:  </os>
Nov 28 18:26:17 compute-0 nova_compute[189296]:  <features>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <acpi/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <apic/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <vmcoreinfo/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:  </features>
Nov 28 18:26:17 compute-0 nova_compute[189296]:  <clock offset="utc">
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <timer name="pit" tickpolicy="delay"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <timer name="hpet" present="no"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:  </clock>
Nov 28 18:26:17 compute-0 nova_compute[189296]:  <cpu mode="host-model" match="exact">
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <topology sockets="1" cores="1" threads="1"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:  </cpu>
Nov 28 18:26:17 compute-0 nova_compute[189296]:  <devices>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <disk type="file" device="disk">
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <target dev="vda" bus="virtio"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <disk type="file" device="cdrom">
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <driver name="qemu" type="raw" cache="none"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <source file="/var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.config"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <target dev="sda" bus="sata"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    </disk>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <interface type="ethernet">
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <mac address="fa:16:3e:c4:e2:c9"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <driver name="vhost" rx_queue_size="512"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <mtu size="1442"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <target dev="tap0a072d7e-c1"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    </interface>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <serial type="pty">
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <log file="/var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/console.log" append="off"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    </serial>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <video>
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <model type="virtio"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    </video>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <input type="tablet" bus="usb"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <rng model="virtio">
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <backend model="random">/dev/urandom</backend>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    </rng>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <controller type="pci" model="pcie-root-port"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <controller type="usb" index="0"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    <memballoon model="virtio">
Nov 28 18:26:17 compute-0 nova_compute[189296]:      <stats period="10"/>
Nov 28 18:26:17 compute-0 nova_compute[189296]:    </memballoon>
Nov 28 18:26:17 compute-0 nova_compute[189296]:  </devices>
Nov 28 18:26:17 compute-0 nova_compute[189296]: </domain>
Nov 28 18:26:17 compute-0 nova_compute[189296]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.725 189300 DEBUG nova.compute.manager [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Preparing to wait for external event network-vif-plugged-0a072d7e-c128-48b9-9754-327584bc3579 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.726 189300 DEBUG oslo_concurrency.lockutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Acquiring lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.726 189300 DEBUG oslo_concurrency.lockutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.726 189300 DEBUG oslo_concurrency.lockutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.727 189300 DEBUG nova.virt.libvirt.vif [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-28T18:26:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5',id=16,image_ref='7d5268e2-45b5-44b2-b3c1-3da9b27b258e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='a12ef97f-9351-448f-95c7-ab90e2c7b098'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c71a276f38f4bfebf1d3631d6f82966',ramdisk_id='',reservation_id='r-tkz6hxoq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7d5268e2-45b5-44b2-b3c1-3da9b27b258e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-320555444',owner_user_name='tempest-PrometheusGabbiT
est-320555444-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-28T18:26:10Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='c1f6c07dc6c5400cbf4fa724992b16d3',uuid=bf6c3ac0-6e00-4be5-ae3a-454d022268e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0a072d7e-c128-48b9-9754-327584bc3579", "address": "fa:16:3e:c4:e2:c9", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a072d7e-c1", "ovs_interfaceid": "0a072d7e-c128-48b9-9754-327584bc3579", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.727 189300 DEBUG nova.network.os_vif_util [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Converting VIF {"id": "0a072d7e-c128-48b9-9754-327584bc3579", "address": "fa:16:3e:c4:e2:c9", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a072d7e-c1", "ovs_interfaceid": "0a072d7e-c128-48b9-9754-327584bc3579", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.727 189300 DEBUG nova.network.os_vif_util [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:e2:c9,bridge_name='br-int',has_traffic_filtering=True,id=0a072d7e-c128-48b9-9754-327584bc3579,network=Network(a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a072d7e-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.728 189300 DEBUG os_vif [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:e2:c9,bridge_name='br-int',has_traffic_filtering=True,id=0a072d7e-c128-48b9-9754-327584bc3579,network=Network(a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a072d7e-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.728 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.730 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.731 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.734 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.735 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a072d7e-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.736 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0a072d7e-c1, col_values=(('external_ids', {'iface-id': '0a072d7e-c128-48b9-9754-327584bc3579', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:e2:c9', 'vm-uuid': 'bf6c3ac0-6e00-4be5-ae3a-454d022268e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.738 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:17 compute-0 NetworkManager[56307]: <info>  [1764354377.7393] manager: (tap0a072d7e-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.741 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.747 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.748 189300 INFO os_vif [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:e2:c9,bridge_name='br-int',has_traffic_filtering=True,id=0a072d7e-c128-48b9-9754-327584bc3579,network=Network(a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a072d7e-c1')#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.791 189300 DEBUG nova.virt.libvirt.driver [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.791 189300 DEBUG nova.virt.libvirt.driver [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.792 189300 DEBUG nova.virt.libvirt.driver [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] No VIF found with MAC fa:16:3e:c4:e2:c9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 28 18:26:17 compute-0 nova_compute[189296]: 2025-11-28 18:26:17.792 189300 INFO nova.virt.libvirt.driver [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Using config drive#033[00m
Nov 28 18:26:18 compute-0 nova_compute[189296]: 2025-11-28 18:26:18.568 189300 INFO nova.virt.libvirt.driver [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Creating config drive at /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.config#033[00m
Nov 28 18:26:18 compute-0 nova_compute[189296]: 2025-11-28 18:26:18.575 189300 DEBUG oslo_concurrency.processutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprdj91zdn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:26:18 compute-0 nova_compute[189296]: 2025-11-28 18:26:18.621 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:26:18 compute-0 nova_compute[189296]: 2025-11-28 18:26:18.723 189300 DEBUG oslo_concurrency.processutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprdj91zdn" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:26:18 compute-0 kernel: tap0a072d7e-c1: entered promiscuous mode
Nov 28 18:26:18 compute-0 NetworkManager[56307]: <info>  [1764354378.8106] manager: (tap0a072d7e-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/81)
Nov 28 18:26:18 compute-0 ovn_controller[97771]: 2025-11-28T18:26:18Z|00179|binding|INFO|Claiming lport 0a072d7e-c128-48b9-9754-327584bc3579 for this chassis.
Nov 28 18:26:18 compute-0 ovn_controller[97771]: 2025-11-28T18:26:18Z|00180|binding|INFO|0a072d7e-c128-48b9-9754-327584bc3579: Claiming fa:16:3e:c4:e2:c9 10.100.1.22
Nov 28 18:26:18 compute-0 nova_compute[189296]: 2025-11-28 18:26:18.834 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:18 compute-0 systemd-udevd[253314]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 18:26:18 compute-0 ovn_controller[97771]: 2025-11-28T18:26:18Z|00181|binding|INFO|Setting lport 0a072d7e-c128-48b9-9754-327584bc3579 ovn-installed in OVS
Nov 28 18:26:18 compute-0 nova_compute[189296]: 2025-11-28 18:26:18.857 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:18 compute-0 NetworkManager[56307]: <info>  [1764354378.8656] device (tap0a072d7e-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 28 18:26:18 compute-0 NetworkManager[56307]: <info>  [1764354378.8724] device (tap0a072d7e-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 28 18:26:18 compute-0 ovn_controller[97771]: 2025-11-28T18:26:18Z|00182|binding|INFO|Setting lport 0a072d7e-c128-48b9-9754-327584bc3579 up in Southbound
Nov 28 18:26:18 compute-0 systemd-machined[155703]: New machine qemu-17-instance-00000010.
Nov 28 18:26:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:26:18.887 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:e2:c9 10.100.1.22'], port_security=['fa:16:3e:c4:e2:c9 10.100.1.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.22/16', 'neutron:device_id': 'bf6c3ac0-6e00-4be5-ae3a-454d022268e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c71a276f38f4bfebf1d3631d6f82966', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b7e19568-d693-4981-82d8-a6cf61584030', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21fa20d8-e3c8-4e6c-a5e8-bb4e198483f9, chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=0a072d7e-c128-48b9-9754-327584bc3579) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:26:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:26:18.888 106624 INFO neutron.agent.ovn.metadata.agent [-] Port 0a072d7e-c128-48b9-9754-327584bc3579 in datapath a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5 bound to our chassis#033[00m
Nov 28 18:26:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:26:18.890 106624 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5#033[00m
Nov 28 18:26:18 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-00000010.
Nov 28 18:26:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:26:18.918 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[e2cd251e-001d-458e-92ce-30726107777a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:26:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:26:18.976 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[1b7b8b22-ef63-4e31-9891-18092e302b80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:26:18 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:26:18.981 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[6ddfce95-01b0-439d-8593-e002e7143bf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:26:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:26:19.021 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[48b13aa0-68d5-4a2e-964a-4fe0b73ee0a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:26:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:26:19.050 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[2074e082-bae9-40f1-ad13-f2435220911f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa60c0580-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d1:11:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527149, 'reachable_time': 21227, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253331, 'error': None, 'target': 'ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:26:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:26:19.072 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[41bb4c83-c7a8-4c97-a94d-daaffe33e974]: (4, ({'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapa60c0580-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527163, 'tstamp': 527163}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253332, 'error': None, 'target': 'ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa60c0580-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527167, 'tstamp': 527167}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253332, 'error': None, 'target': 'ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:26:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:26:19.075 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa60c0580-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.078 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:26:19.081 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa60c0580-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:26:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:26:19.082 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:26:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:26:19.084 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa60c0580-50, col_values=(('external_ids', {'iface-id': '29b269a8-673c-48a9-bc1f-c180355b2c1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:26:19 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:26:19.085 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.243 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764354379.2427285, bf6c3ac0-6e00-4be5-ae3a-454d022268e8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.243 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] VM Started (Lifecycle Event)#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.272 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.277 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764354379.242848, bf6c3ac0-6e00-4be5-ae3a-454d022268e8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.278 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] VM Paused (Lifecycle Event)#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.311 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.318 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.341 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.814 189300 DEBUG nova.network.neutron [req-6e845e40-470a-4c19-9edb-e0b60a016742 req-92f9234a-b379-44a1-b283-822c9bfe7e1c 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Updated VIF entry in instance network info cache for port 0a072d7e-c128-48b9-9754-327584bc3579. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.815 189300 DEBUG nova.network.neutron [req-6e845e40-470a-4c19-9edb-e0b60a016742 req-92f9234a-b379-44a1-b283-822c9bfe7e1c 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Updating instance_info_cache with network_info: [{"id": "0a072d7e-c128-48b9-9754-327584bc3579", "address": "fa:16:3e:c4:e2:c9", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a072d7e-c1", "ovs_interfaceid": "0a072d7e-c128-48b9-9754-327584bc3579", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.834 189300 DEBUG nova.compute.manager [req-ad91bed9-fc82-4fc9-84ec-87f941f6fc06 req-5ae55135-5b32-4d2c-bedc-29368a5e170a 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Received event network-vif-plugged-0a072d7e-c128-48b9-9754-327584bc3579 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.834 189300 DEBUG oslo_concurrency.lockutils [req-ad91bed9-fc82-4fc9-84ec-87f941f6fc06 req-5ae55135-5b32-4d2c-bedc-29368a5e170a 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.835 189300 DEBUG oslo_concurrency.lockutils [req-ad91bed9-fc82-4fc9-84ec-87f941f6fc06 req-5ae55135-5b32-4d2c-bedc-29368a5e170a 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.835 189300 DEBUG oslo_concurrency.lockutils [req-ad91bed9-fc82-4fc9-84ec-87f941f6fc06 req-5ae55135-5b32-4d2c-bedc-29368a5e170a 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.835 189300 DEBUG nova.compute.manager [req-ad91bed9-fc82-4fc9-84ec-87f941f6fc06 req-5ae55135-5b32-4d2c-bedc-29368a5e170a 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Processing event network-vif-plugged-0a072d7e-c128-48b9-9754-327584bc3579 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.836 189300 DEBUG nova.compute.manager [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.837 189300 DEBUG oslo_concurrency.lockutils [req-6e845e40-470a-4c19-9edb-e0b60a016742 req-92f9234a-b379-44a1-b283-822c9bfe7e1c 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Releasing lock "refresh_cache-bf6c3ac0-6e00-4be5-ae3a-454d022268e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.841 189300 DEBUG nova.virt.driver [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] Emitting event <LifecycleEvent: 1764354379.8413916, bf6c3ac0-6e00-4be5-ae3a-454d022268e8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.841 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] VM Resumed (Lifecycle Event)#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.843 189300 DEBUG nova.virt.libvirt.driver [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.848 189300 INFO nova.virt.libvirt.driver [-] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Instance spawned successfully.#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.848 189300 DEBUG nova.virt.libvirt.driver [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.860 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.867 189300 DEBUG nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.872 189300 DEBUG nova.virt.libvirt.driver [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.872 189300 DEBUG nova.virt.libvirt.driver [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.873 189300 DEBUG nova.virt.libvirt.driver [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.874 189300 DEBUG nova.virt.libvirt.driver [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.874 189300 DEBUG nova.virt.libvirt.driver [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.875 189300 DEBUG nova.virt.libvirt.driver [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.898 189300 INFO nova.compute.manager [None req-b53da8cd-41c1-47b2-8900-60ed0c0c72fc - - - - - -] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.942 189300 INFO nova.compute.manager [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Took 9.04 seconds to spawn the instance on the hypervisor.#033[00m
Nov 28 18:26:19 compute-0 nova_compute[189296]: 2025-11-28 18:26:19.943 189300 DEBUG nova.compute.manager [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:26:20 compute-0 nova_compute[189296]: 2025-11-28 18:26:20.040 189300 INFO nova.compute.manager [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Took 9.69 seconds to build instance.#033[00m
Nov 28 18:26:20 compute-0 nova_compute[189296]: 2025-11-28 18:26:20.059 189300 DEBUG oslo_concurrency.lockutils [None req-8734d762-e4cc-411a-81ea-a41dd77357e4 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:26:21 compute-0 nova_compute[189296]: 2025-11-28 18:26:21.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:26:21 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 28 18:26:21 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 28 18:26:22 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:26:22.462 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d60b742f-7e94-4137-b50a-cfc8eac54167, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:26:22 compute-0 nova_compute[189296]: 2025-11-28 18:26:22.531 189300 DEBUG nova.compute.manager [req-fadb5b70-7255-4a8f-b63d-7ba595b37ace req-d287cdd1-ee3c-47f0-b9fb-68c39de9dae8 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Received event network-vif-plugged-0a072d7e-c128-48b9-9754-327584bc3579 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:26:22 compute-0 nova_compute[189296]: 2025-11-28 18:26:22.532 189300 DEBUG oslo_concurrency.lockutils [req-fadb5b70-7255-4a8f-b63d-7ba595b37ace req-d287cdd1-ee3c-47f0-b9fb-68c39de9dae8 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:26:22 compute-0 nova_compute[189296]: 2025-11-28 18:26:22.532 189300 DEBUG oslo_concurrency.lockutils [req-fadb5b70-7255-4a8f-b63d-7ba595b37ace req-d287cdd1-ee3c-47f0-b9fb-68c39de9dae8 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:26:22 compute-0 nova_compute[189296]: 2025-11-28 18:26:22.533 189300 DEBUG oslo_concurrency.lockutils [req-fadb5b70-7255-4a8f-b63d-7ba595b37ace req-d287cdd1-ee3c-47f0-b9fb-68c39de9dae8 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:26:22 compute-0 nova_compute[189296]: 2025-11-28 18:26:22.534 189300 DEBUG nova.compute.manager [req-fadb5b70-7255-4a8f-b63d-7ba595b37ace req-d287cdd1-ee3c-47f0-b9fb-68c39de9dae8 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] No waiting events found dispatching network-vif-plugged-0a072d7e-c128-48b9-9754-327584bc3579 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:26:22 compute-0 nova_compute[189296]: 2025-11-28 18:26:22.534 189300 WARNING nova.compute.manager [req-fadb5b70-7255-4a8f-b63d-7ba595b37ace req-d287cdd1-ee3c-47f0-b9fb-68c39de9dae8 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Received unexpected event network-vif-plugged-0a072d7e-c128-48b9-9754-327584bc3579 for instance with vm_state active and task_state None.#033[00m
Nov 28 18:26:22 compute-0 nova_compute[189296]: 2025-11-28 18:26:22.739 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:23 compute-0 nova_compute[189296]: 2025-11-28 18:26:23.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:26:23 compute-0 nova_compute[189296]: 2025-11-28 18:26:23.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:26:23 compute-0 nova_compute[189296]: 2025-11-28 18:26:23.857 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:24 compute-0 nova_compute[189296]: 2025-11-28 18:26:24.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:26:26 compute-0 podman[253361]: 2025-11-28 18:26:26.031775473 +0000 UTC m=+0.090003258 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.6, config_id=edpm, distribution-scope=public, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64)
Nov 28 18:26:26 compute-0 podman[253362]: 2025-11-28 18:26:26.041762127 +0000 UTC m=+0.096290202 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Nov 28 18:26:26 compute-0 podman[253363]: 2025-11-28 18:26:26.06198642 +0000 UTC m=+0.109476913 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:26:26 compute-0 nova_compute[189296]: 2025-11-28 18:26:26.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:26:26 compute-0 nova_compute[189296]: 2025-11-28 18:26:26.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:26:26 compute-0 nova_compute[189296]: 2025-11-28 18:26:26.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 18:26:27 compute-0 nova_compute[189296]: 2025-11-28 18:26:27.338 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:26:27 compute-0 nova_compute[189296]: 2025-11-28 18:26:27.339 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:26:27 compute-0 nova_compute[189296]: 2025-11-28 18:26:27.340 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:26:27 compute-0 nova_compute[189296]: 2025-11-28 18:26:27.341 189300 DEBUG nova.objects.instance [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 200bd8bc-d121-4a86-b728-ea98aac95adf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:26:27 compute-0 nova_compute[189296]: 2025-11-28 18:26:27.744 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:28 compute-0 nova_compute[189296]: 2025-11-28 18:26:28.859 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:28 compute-0 nova_compute[189296]: 2025-11-28 18:26:28.863 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Updating instance_info_cache with network_info: [{"id": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "address": "fa:16:3e:c6:fd:79", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49c3cd00-3b", "ovs_interfaceid": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:26:28 compute-0 nova_compute[189296]: 2025-11-28 18:26:28.883 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:26:28 compute-0 nova_compute[189296]: 2025-11-28 18:26:28.883 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:26:29 compute-0 podman[203494]: time="2025-11-28T18:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:26:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:26:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4792 "" "Go-http-client/1.1"
Nov 28 18:26:30 compute-0 nova_compute[189296]: 2025-11-28 18:26:30.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:26:30 compute-0 nova_compute[189296]: 2025-11-28 18:26:30.653 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:26:30 compute-0 nova_compute[189296]: 2025-11-28 18:26:30.654 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:26:30 compute-0 nova_compute[189296]: 2025-11-28 18:26:30.654 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:26:30 compute-0 nova_compute[189296]: 2025-11-28 18:26:30.655 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:26:30 compute-0 nova_compute[189296]: 2025-11-28 18:26:30.742 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:26:30 compute-0 nova_compute[189296]: 2025-11-28 18:26:30.803 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:26:30 compute-0 nova_compute[189296]: 2025-11-28 18:26:30.805 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:26:30 compute-0 nova_compute[189296]: 2025-11-28 18:26:30.866 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:26:30 compute-0 nova_compute[189296]: 2025-11-28 18:26:30.873 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:26:30 compute-0 nova_compute[189296]: 2025-11-28 18:26:30.932 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:26:30 compute-0 nova_compute[189296]: 2025-11-28 18:26:30.933 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:26:30 compute-0 nova_compute[189296]: 2025-11-28 18:26:30.991 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:26:31 compute-0 nova_compute[189296]: 2025-11-28 18:26:31.335 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:26:31 compute-0 nova_compute[189296]: 2025-11-28 18:26:31.339 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5011MB free_disk=72.27716064453125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:26:31 compute-0 nova_compute[189296]: 2025-11-28 18:26:31.340 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:26:31 compute-0 nova_compute[189296]: 2025-11-28 18:26:31.340 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:26:31 compute-0 openstack_network_exporter[205632]: ERROR   18:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:26:31 compute-0 openstack_network_exporter[205632]: ERROR   18:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:26:31 compute-0 openstack_network_exporter[205632]: ERROR   18:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:26:31 compute-0 openstack_network_exporter[205632]: ERROR   18:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:26:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:26:31 compute-0 openstack_network_exporter[205632]: ERROR   18:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:26:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:26:31 compute-0 nova_compute[189296]: 2025-11-28 18:26:31.593 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 200bd8bc-d121-4a86-b728-ea98aac95adf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:26:31 compute-0 nova_compute[189296]: 2025-11-28 18:26:31.593 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance bf6c3ac0-6e00-4be5-ae3a-454d022268e8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:26:31 compute-0 nova_compute[189296]: 2025-11-28 18:26:31.593 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:26:31 compute-0 nova_compute[189296]: 2025-11-28 18:26:31.594 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:26:31 compute-0 nova_compute[189296]: 2025-11-28 18:26:31.610 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing inventories for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 28 18:26:31 compute-0 nova_compute[189296]: 2025-11-28 18:26:31.626 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Updating ProviderTree inventory for provider d10a9930-4504-4222-97f7-6727a5a2d43b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 28 18:26:31 compute-0 nova_compute[189296]: 2025-11-28 18:26:31.627 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Updating inventory in ProviderTree for provider d10a9930-4504-4222-97f7-6727a5a2d43b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 28 18:26:31 compute-0 nova_compute[189296]: 2025-11-28 18:26:31.638 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing aggregate associations for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 28 18:26:31 compute-0 nova_compute[189296]: 2025-11-28 18:26:31.663 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing trait associations for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b, traits: HW_CPU_X86_ABM,COMPUTE_NODE,HW_CPU_X86_SVM,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SATA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 28 18:26:31 compute-0 nova_compute[189296]: 2025-11-28 18:26:31.728 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:26:31 compute-0 nova_compute[189296]: 2025-11-28 18:26:31.750 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:26:31 compute-0 nova_compute[189296]: 2025-11-28 18:26:31.772 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:26:31 compute-0 nova_compute[189296]: 2025-11-28 18:26:31.773 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.433s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:26:32 compute-0 nova_compute[189296]: 2025-11-28 18:26:32.749 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:32 compute-0 nova_compute[189296]: 2025-11-28 18:26:32.773 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:26:33 compute-0 podman[253432]: 2025-11-28 18:26:33.032628463 +0000 UTC m=+0.086153955 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, release-0.7.12=, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, config_id=edpm, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, release=1214.1726694543, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=base rhel9)
Nov 28 18:26:33 compute-0 podman[253430]: 2025-11-28 18:26:33.036411616 +0000 UTC m=+0.091925566 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:26:33 compute-0 podman[253431]: 2025-11-28 18:26:33.052870447 +0000 UTC m=+0.108217023 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent)
Nov 28 18:26:33 compute-0 podman[253433]: 2025-11-28 18:26:33.054185279 +0000 UTC m=+0.104416870 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 28 18:26:33 compute-0 nova_compute[189296]: 2025-11-28 18:26:33.862 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:34 compute-0 nova_compute[189296]: 2025-11-28 18:26:34.619 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:26:34 compute-0 nova_compute[189296]: 2025-11-28 18:26:34.638 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:26:34 compute-0 nova_compute[189296]: 2025-11-28 18:26:34.639 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:26:36 compute-0 podman[253505]: 2025-11-28 18:26:36.060542292 +0000 UTC m=+0.109838912 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:26:37 compute-0 nova_compute[189296]: 2025-11-28 18:26:37.753 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:38 compute-0 nova_compute[189296]: 2025-11-28 18:26:38.863 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:42 compute-0 nova_compute[189296]: 2025-11-28 18:26:42.758 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:43 compute-0 nova_compute[189296]: 2025-11-28 18:26:43.865 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:43 compute-0 podman[253530]: 2025-11-28 18:26:43.996707666 +0000 UTC m=+0.061286267 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:26:47 compute-0 nova_compute[189296]: 2025-11-28 18:26:47.762 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:48 compute-0 ovn_controller[97771]: 2025-11-28T18:26:48Z|00183|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Nov 28 18:26:48 compute-0 nova_compute[189296]: 2025-11-28 18:26:48.866 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:51.988 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 28 18:26:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:51.989 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 28 18:26:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:51.989 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:26:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:51.990 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc143395760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:26:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:51.991 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:26:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:51.991 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:26:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:51.992 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:26:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:51.992 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:26:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:51.993 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:26:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:51.993 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:26:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:51.994 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:26:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:51.994 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:26:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:51.995 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:26:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:51.995 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:26:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:51.996 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance bf6c3ac0-6e00-4be5-ae3a-454d022268e8 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 28 18:26:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:51.996 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:26:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:51.997 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/bf6c3ac0-6e00-4be5-ae3a-454d022268e8 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}1b19fef84fe76c5f8eb41f423a94cfc31b2af00fb7940935967c184dd40fa55a" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 28 18:26:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:51.998 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:26:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:52.000 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:26:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:52.000 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:26:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:52.000 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:26:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:52.001 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:26:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:52.001 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:26:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:52.001 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:26:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:52.001 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:26:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:52.002 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:26:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:52.002 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:26:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:52.002 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:26:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:52.002 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:26:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:52.003 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:26:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:52.003 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:26:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:26:52.638 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:26:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:26:52.639 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:26:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:26:52.640 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:26:52 compute-0 nova_compute[189296]: 2025-11-28 18:26:52.766 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.337 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1831 Content-Type: application/json Date: Fri, 28 Nov 2025 18:26:52 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-df356064-c728-4d18-bd69-9a2890e27568 x-openstack-request-id: req-df356064-c728-4d18-bd69-9a2890e27568 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.338 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "bf6c3ac0-6e00-4be5-ae3a-454d022268e8", "name": "te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5", "status": "ACTIVE", "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "user_id": "c1f6c07dc6c5400cbf4fa724992b16d3", "metadata": {"metering.server_group": "a12ef97f-9351-448f-95c7-ab90e2c7b098"}, "hostId": "d63a60f107fb9172c58f42464c0d0697d316dd72980345b387d4da6d", "image": {"id": "7d5268e2-45b5-44b2-b3c1-3da9b27b258e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/7d5268e2-45b5-44b2-b3c1-3da9b27b258e"}]}, "flavor": {"id": "b177f611-8f79-4bfd-9a12-e83e9545757b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/b177f611-8f79-4bfd-9a12-e83e9545757b"}]}, "created": "2025-11-28T18:26:09Z", "updated": "2025-11-28T18:26:20Z", "addresses": {"": [{"version": 4, "addr": "10.100.1.22", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:c4:e2:c9"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/bf6c3ac0-6e00-4be5-ae3a-454d022268e8"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/bf6c3ac0-6e00-4be5-ae3a-454d022268e8"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-28T18:26:19.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000010", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.338 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/bf6c3ac0-6e00-4be5-ae3a-454d022268e8 used request id req-df356064-c728-4d18-bd69-9a2890e27568 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.340 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bf6c3ac0-6e00-4be5-ae3a-454d022268e8', 'name': 'te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5', 'flavor': {'id': 'b177f611-8f79-4bfd-9a12-e83e9545757b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '7d5268e2-45b5-44b2-b3c1-3da9b27b258e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000010', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4c71a276f38f4bfebf1d3631d6f82966', 'user_id': 'c1f6c07dc6c5400cbf4fa724992b16d3', 'hostId': 'd63a60f107fb9172c58f42464c0d0697d316dd72980345b387d4da6d', 'status': 'active', 'metadata': {'metering.server_group': 'a12ef97f-9351-448f-95c7-ab90e2c7b098'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.344 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '200bd8bc-d121-4a86-b728-ea98aac95adf', 'name': 'te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw', 'flavor': {'id': 'b177f611-8f79-4bfd-9a12-e83e9545757b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '7d5268e2-45b5-44b2-b3c1-3da9b27b258e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4c71a276f38f4bfebf1d3631d6f82966', 'user_id': 'c1f6c07dc6c5400cbf4fa724992b16d3', 'hostId': 'd63a60f107fb9172c58f42464c0d0697d316dd72980345b387d4da6d', 'status': 'active', 'metadata': {'metering.server_group': 'a12ef97f-9351-448f-95c7-ab90e2c7b098'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.345 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.345 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.345 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.345 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.346 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-28T18:26:53.345567) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.368 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.368 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.383 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.384 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.384 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.385 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc1433970b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.385 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.385 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.385 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.385 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.386 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-28T18:26:53.385552) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.427 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.bytes volume: 28937216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.427 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.467 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.bytes volume: 29338624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.467 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.468 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.468 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc1433971d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.468 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.468 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.468 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.468 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.469 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.latency volume: 612357263 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.469 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.latency volume: 39977629 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.469 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.latency volume: 562549638 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.469 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.latency volume: 45170226 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.470 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.470 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc143397c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.470 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.470 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-28T18:26:53.468923) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.470 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.471 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.471 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-28T18:26:53.471069) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.471 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.474 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for bf6c3ac0-6e00-4be5-ae3a-454d022268e8 / tap0a072d7e-c1 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.475 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.478 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.478 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.478 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc143397620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.478 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.478 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.478 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.478 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.479 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-28T18:26:53.478963) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.502 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/memory.usage volume: 40.46875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.523 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/memory.usage volume: 42.8359375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.524 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.524 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc143397260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.524 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.524 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.524 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.524 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.524 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.usage volume: 29097984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.525 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.525 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-28T18:26:53.524775) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.525 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.526 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.526 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.526 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc143397290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.526 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.526 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.526 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.526 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.527 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.bytes volume: 72695808 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.527 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.527 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-28T18:26:53.526939) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.527 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.bytes volume: 72884224 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.528 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.528 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.528 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc143408290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.528 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.528 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.528 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.529 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.529 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.529 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.529 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.529 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc1433972f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.529 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.529 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.530 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.530 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.530 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.latency volume: 2874076560 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.530 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.530 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.latency volume: 2362907010 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.530 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.531 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.531 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc144640f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.531 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.531 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.531 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.531 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.531 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.requests volume: 300 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.531 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.532 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.requests volume: 312 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.532 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.532 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.532 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc1433976b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.532 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.532 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.533 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.533 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.533 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.533 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.533 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-28T18:26:53.529038) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.533 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-28T18:26:53.530082) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.533 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-28T18:26:53.531622) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.534 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-28T18:26:53.533087) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.534 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.534 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc143397fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.534 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.534 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.534 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.534 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.534 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.535 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.535 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.535 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc14457db80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.535 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.535 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.535 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.535 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.535 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/cpu volume: 32100000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.536 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/cpu volume: 285190000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.536 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.536 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc143397950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.536 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.536 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.536 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.536 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.537 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.537 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-28T18:26:53.534763) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.537 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-28T18:26:53.535884) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.537 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-28T18:26:53.536915) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.537 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5>]
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.537 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc143397380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.537 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.537 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.538 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.538 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.538 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.538 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc143397bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.538 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.538 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.538 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.539 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-28T18:26:53.538090) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.539 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.539 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.incoming.packets volume: 8 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.539 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.packets volume: 11 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.539 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.540 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-28T18:26:53.539305) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.540 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc1433973e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.540 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.540 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.540 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.540 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.541 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.541 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc143397c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.541 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.541 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.541 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.541 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.541 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.541 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.542 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.542 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc143397ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.542 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.542 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.542 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.542 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.542 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.outgoing.bytes volume: 266 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.543 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.543 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.543 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc1460ad370>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.543 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.543 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.543 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.543 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.543 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.allocation volume: 29302784 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.544 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.544 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.544 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-28T18:26:53.540917) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.544 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-28T18:26:53.541707) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.544 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-28T18:26:53.542828) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.544 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-28T18:26:53.543834) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.545 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.545 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.545 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc143397d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.545 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.545 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.545 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.545 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.546 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.546 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.546 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.546 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc143397e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.546 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.546 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.546 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.546 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.547 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.547 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5>]
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.547 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc143397650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.547 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.547 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.547 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.547 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.547 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.incoming.bytes volume: 616 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.547 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.bytes volume: 1436 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.548 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-28T18:26:53.545927) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.548 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-28T18:26:53.546933) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.548 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-28T18:26:53.547611) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.549 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.549 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc143397e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.549 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.549 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.549 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.549 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.549 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.outgoing.packets volume: 3 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.549 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.550 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-28T18:26:53.549551) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.550 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.550 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc143397f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.550 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.550 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.550 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.550 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.551 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.551 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.551 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.551 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc143397230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.551 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.551 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.551 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.551 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.551 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.requests volume: 1041 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.552 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.552 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.requests volume: 1056 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.552 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.552 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.553 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.553 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.553 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.553 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.554 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.554 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.554 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-28T18:26:53.550918) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.554 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-28T18:26:53.551891) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.554 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.555 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.555 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.555 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.555 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.555 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.556 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.556 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.556 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.556 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.557 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.557 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.557 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.557 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.557 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.558 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.558 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.558 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.558 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:26:53 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:26:53.558 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:26:53 compute-0 ovn_controller[97771]: 2025-11-28T18:26:53Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c4:e2:c9 10.100.1.22
Nov 28 18:26:53 compute-0 ovn_controller[97771]: 2025-11-28T18:26:53Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c4:e2:c9 10.100.1.22
Nov 28 18:26:53 compute-0 nova_compute[189296]: 2025-11-28 18:26:53.868 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:57 compute-0 podman[253570]: 2025-11-28 18:26:57.046945917 +0000 UTC m=+0.094478347 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 28 18:26:57 compute-0 podman[253569]: 2025-11-28 18:26:57.057997857 +0000 UTC m=+0.110718604 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=f26160204c78771e78cdd2489258319b, io.buildah.version=1.41.4)
Nov 28 18:26:57 compute-0 podman[253568]: 2025-11-28 18:26:57.064154998 +0000 UTC m=+0.119669273 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, version=9.6, io.openshift.expose-services=, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 28 18:26:57 compute-0 nova_compute[189296]: 2025-11-28 18:26:57.778 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:58 compute-0 nova_compute[189296]: 2025-11-28 18:26:58.869 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:26:59 compute-0 podman[203494]: time="2025-11-28T18:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:26:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:26:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4794 "" "Go-http-client/1.1"
Nov 28 18:27:01 compute-0 openstack_network_exporter[205632]: ERROR   18:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:27:01 compute-0 openstack_network_exporter[205632]: ERROR   18:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:27:01 compute-0 openstack_network_exporter[205632]: ERROR   18:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:27:01 compute-0 openstack_network_exporter[205632]: ERROR   18:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:27:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:27:01 compute-0 openstack_network_exporter[205632]: ERROR   18:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:27:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:27:02 compute-0 nova_compute[189296]: 2025-11-28 18:27:02.781 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:27:03 compute-0 nova_compute[189296]: 2025-11-28 18:27:03.871 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:27:04 compute-0 podman[253624]: 2025-11-28 18:27:04.037558926 +0000 UTC m=+0.097113321 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 28 18:27:04 compute-0 podman[253626]: 2025-11-28 18:27:04.052398439 +0000 UTC m=+0.106401548 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.29.0, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, version=9.4, container_name=kepler, maintainer=Red Hat, Inc., release-0.7.12=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container)
Nov 28 18:27:04 compute-0 podman[253625]: 2025-11-28 18:27:04.052951182 +0000 UTC m=+0.110061978 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 28 18:27:04 compute-0 podman[253627]: 2025-11-28 18:27:04.056256143 +0000 UTC m=+0.105641140 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible)
Nov 28 18:27:07 compute-0 podman[253697]: 2025-11-28 18:27:07.089186195 +0000 UTC m=+0.134786292 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:27:07 compute-0 nova_compute[189296]: 2025-11-28 18:27:07.784 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:27:08 compute-0 nova_compute[189296]: 2025-11-28 18:27:08.873 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:27:12 compute-0 nova_compute[189296]: 2025-11-28 18:27:12.787 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:27:13 compute-0 nova_compute[189296]: 2025-11-28 18:27:13.875 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:27:14 compute-0 podman[253723]: 2025-11-28 18:27:14.74978802 +0000 UTC m=+0.078999390 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:27:17 compute-0 nova_compute[189296]: 2025-11-28 18:27:17.790 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:27:18 compute-0 nova_compute[189296]: 2025-11-28 18:27:18.878 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:27:19 compute-0 nova_compute[189296]: 2025-11-28 18:27:19.639 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:27:22 compute-0 nova_compute[189296]: 2025-11-28 18:27:22.793 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:27:23 compute-0 nova_compute[189296]: 2025-11-28 18:27:23.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:27:23 compute-0 nova_compute[189296]: 2025-11-28 18:27:23.880 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:27:24 compute-0 nova_compute[189296]: 2025-11-28 18:27:24.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:27:25 compute-0 nova_compute[189296]: 2025-11-28 18:27:25.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:27:25 compute-0 nova_compute[189296]: 2025-11-28 18:27:25.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:27:27 compute-0 nova_compute[189296]: 2025-11-28 18:27:27.799 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:27:28 compute-0 podman[253749]: 2025-11-28 18:27:28.05104741 +0000 UTC m=+0.088033410 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 28 18:27:28 compute-0 podman[253748]: 2025-11-28 18:27:28.07072639 +0000 UTC m=+0.116193587 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=f26160204c78771e78cdd2489258319b, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 28 18:27:28 compute-0 podman[253747]: 2025-11-28 18:27:28.077389903 +0000 UTC m=+0.126745846 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, release=1755695350, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, managed_by=edpm_ansible, distribution-scope=public, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Nov 28 18:27:28 compute-0 nova_compute[189296]: 2025-11-28 18:27:28.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:27:28 compute-0 nova_compute[189296]: 2025-11-28 18:27:28.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:27:28 compute-0 nova_compute[189296]: 2025-11-28 18:27:28.881 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:27:29 compute-0 nova_compute[189296]: 2025-11-28 18:27:29.379 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-bf6c3ac0-6e00-4be5-ae3a-454d022268e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:27:29 compute-0 nova_compute[189296]: 2025-11-28 18:27:29.380 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-bf6c3ac0-6e00-4be5-ae3a-454d022268e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:27:29 compute-0 nova_compute[189296]: 2025-11-28 18:27:29.381 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:27:29 compute-0 podman[203494]: time="2025-11-28T18:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:27:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:27:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4787 "" "Go-http-client/1.1"
Nov 28 18:27:31 compute-0 openstack_network_exporter[205632]: ERROR   18:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:27:31 compute-0 openstack_network_exporter[205632]: ERROR   18:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:27:31 compute-0 openstack_network_exporter[205632]: ERROR   18:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:27:31 compute-0 openstack_network_exporter[205632]: ERROR   18:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:27:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:27:31 compute-0 openstack_network_exporter[205632]: ERROR   18:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:27:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:27:31 compute-0 nova_compute[189296]: 2025-11-28 18:27:31.633 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Updating instance_info_cache with network_info: [{"id": "0a072d7e-c128-48b9-9754-327584bc3579", "address": "fa:16:3e:c4:e2:c9", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a072d7e-c1", "ovs_interfaceid": "0a072d7e-c128-48b9-9754-327584bc3579", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:27:31 compute-0 nova_compute[189296]: 2025-11-28 18:27:31.657 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-bf6c3ac0-6e00-4be5-ae3a-454d022268e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:27:31 compute-0 nova_compute[189296]: 2025-11-28 18:27:31.657 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:27:31 compute-0 nova_compute[189296]: 2025-11-28 18:27:31.658 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:27:31 compute-0 nova_compute[189296]: 2025-11-28 18:27:31.659 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:27:31 compute-0 nova_compute[189296]: 2025-11-28 18:27:31.684 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:27:31 compute-0 nova_compute[189296]: 2025-11-28 18:27:31.684 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:27:31 compute-0 nova_compute[189296]: 2025-11-28 18:27:31.685 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:27:31 compute-0 nova_compute[189296]: 2025-11-28 18:27:31.685 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:27:31 compute-0 nova_compute[189296]: 2025-11-28 18:27:31.761 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:27:31 compute-0 nova_compute[189296]: 2025-11-28 18:27:31.860 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:27:31 compute-0 nova_compute[189296]: 2025-11-28 18:27:31.861 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:27:31 compute-0 nova_compute[189296]: 2025-11-28 18:27:31.923 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:27:31 compute-0 nova_compute[189296]: 2025-11-28 18:27:31.931 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:27:31 compute-0 nova_compute[189296]: 2025-11-28 18:27:31.988 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:27:31 compute-0 nova_compute[189296]: 2025-11-28 18:27:31.990 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:27:32 compute-0 nova_compute[189296]: 2025-11-28 18:27:32.049 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:27:32 compute-0 nova_compute[189296]: 2025-11-28 18:27:32.364 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:27:32 compute-0 nova_compute[189296]: 2025-11-28 18:27:32.365 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4971MB free_disk=72.24919128417969GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:27:32 compute-0 nova_compute[189296]: 2025-11-28 18:27:32.366 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:27:32 compute-0 nova_compute[189296]: 2025-11-28 18:27:32.366 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:27:32 compute-0 nova_compute[189296]: 2025-11-28 18:27:32.474 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 200bd8bc-d121-4a86-b728-ea98aac95adf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:27:32 compute-0 nova_compute[189296]: 2025-11-28 18:27:32.474 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance bf6c3ac0-6e00-4be5-ae3a-454d022268e8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:27:32 compute-0 nova_compute[189296]: 2025-11-28 18:27:32.475 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:27:32 compute-0 nova_compute[189296]: 2025-11-28 18:27:32.475 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:27:32 compute-0 nova_compute[189296]: 2025-11-28 18:27:32.537 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:27:32 compute-0 nova_compute[189296]: 2025-11-28 18:27:32.550 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:27:32 compute-0 nova_compute[189296]: 2025-11-28 18:27:32.551 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:27:32 compute-0 nova_compute[189296]: 2025-11-28 18:27:32.552 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:27:32 compute-0 nova_compute[189296]: 2025-11-28 18:27:32.803 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:27:33 compute-0 nova_compute[189296]: 2025-11-28 18:27:33.884 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:27:35 compute-0 podman[253815]: 2025-11-28 18:27:35.030826885 +0000 UTC m=+0.079658455 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 28 18:27:35 compute-0 podman[253813]: 2025-11-28 18:27:35.035455878 +0000 UTC m=+0.092514429 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 28 18:27:35 compute-0 podman[253814]: 2025-11-28 18:27:35.051192032 +0000 UTC m=+0.107542465 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-type=git, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, config_id=edpm, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release=1214.1726694543, architecture=x86_64, container_name=kepler)
Nov 28 18:27:35 compute-0 podman[253812]: 2025-11-28 18:27:35.069435937 +0000 UTC m=+0.119940468 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 28 18:27:36 compute-0 nova_compute[189296]: 2025-11-28 18:27:36.518 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:27:36 compute-0 nova_compute[189296]: 2025-11-28 18:27:36.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:27:37 compute-0 nova_compute[189296]: 2025-11-28 18:27:37.807 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:27:38 compute-0 podman[253892]: 2025-11-28 18:27:38.091128486 +0000 UTC m=+0.140408490 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:27:38 compute-0 nova_compute[189296]: 2025-11-28 18:27:38.887 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:27:42 compute-0 nova_compute[189296]: 2025-11-28 18:27:42.812 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:27:43 compute-0 nova_compute[189296]: 2025-11-28 18:27:43.890 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:27:45 compute-0 podman[253919]: 2025-11-28 18:27:45.011270815 +0000 UTC m=+0.078344784 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:27:47 compute-0 nova_compute[189296]: 2025-11-28 18:27:47.816 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.626 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.627 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.628 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.628 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.629 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.630 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.655 189300 DEBUG nova.virt.libvirt.imagecache [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.668 189300 DEBUG nova.virt.libvirt.imagecache [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.668 189300 DEBUG nova.virt.libvirt.imagecache [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Image id 7d5268e2-45b5-44b2-b3c1-3da9b27b258e yields fingerprint ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.669 189300 INFO nova.virt.libvirt.imagecache [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] image 7d5268e2-45b5-44b2-b3c1-3da9b27b258e at (/var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa): checking#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.669 189300 DEBUG nova.virt.libvirt.imagecache [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] image 7d5268e2-45b5-44b2-b3c1-3da9b27b258e at (/var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.672 189300 DEBUG nova.virt.libvirt.imagecache [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.672 189300 DEBUG nova.virt.libvirt.imagecache [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] 200bd8bc-d121-4a86-b728-ea98aac95adf is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.673 189300 DEBUG nova.virt.libvirt.imagecache [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] 200bd8bc-d121-4a86-b728-ea98aac95adf has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.673 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.757 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.759 189300 DEBUG nova.virt.libvirt.imagecache [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 200bd8bc-d121-4a86-b728-ea98aac95adf is backed by ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.759 189300 DEBUG nova.virt.libvirt.imagecache [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] bf6c3ac0-6e00-4be5-ae3a-454d022268e8 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.759 189300 DEBUG nova.virt.libvirt.imagecache [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] bf6c3ac0-6e00-4be5-ae3a-454d022268e8 has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.760 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.852 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.854 189300 DEBUG nova.virt.libvirt.imagecache [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance bf6c3ac0-6e00-4be5-ae3a-454d022268e8 is backed by ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.854 189300 WARNING nova.virt.libvirt.imagecache [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Unknown base file: /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.855 189300 WARNING nova.virt.libvirt.imagecache [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Unknown base file: /var/lib/nova/instances/_base/14d87f60afaabf504203a4757919b9a5f2b5b19a#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.855 189300 WARNING nova.virt.libvirt.imagecache [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Unknown base file: /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.855 189300 INFO nova.virt.libvirt.imagecache [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Active base files: /var/lib/nova/instances/_base/ef920c1e18b8d4893a37ced7af16cdbce2c2e0aa#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.856 189300 INFO nova.virt.libvirt.imagecache [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Removable base files: /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598 /var/lib/nova/instances/_base/14d87f60afaabf504203a4757919b9a5f2b5b19a /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.857 189300 INFO nova.virt.libvirt.imagecache [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/f8e1ccb00af4752d8a5c7b44d7152dd9458fb598#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.857 189300 INFO nova.virt.libvirt.imagecache [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/14d87f60afaabf504203a4757919b9a5f2b5b19a#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.858 189300 INFO nova.virt.libvirt.imagecache [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/98857e8e8776e503eed9cdcd9e8eeb7fa1d0da6c#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.858 189300 DEBUG nova.virt.libvirt.imagecache [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.858 189300 DEBUG nova.virt.libvirt.imagecache [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.859 189300 DEBUG nova.virt.libvirt.imagecache [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.859 189300 INFO nova.virt.libvirt.imagecache [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Nov 28 18:27:48 compute-0 nova_compute[189296]: 2025-11-28 18:27:48.892 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:27:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:27:52.640 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:27:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:27:52.640 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:27:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:27:52.641 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:27:52 compute-0 nova_compute[189296]: 2025-11-28 18:27:52.819 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:27:53 compute-0 nova_compute[189296]: 2025-11-28 18:27:53.896 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:27:57 compute-0 nova_compute[189296]: 2025-11-28 18:27:57.825 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:27:58 compute-0 nova_compute[189296]: 2025-11-28 18:27:58.898 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:27:59 compute-0 podman[253960]: 2025-11-28 18:27:59.024424293 +0000 UTC m=+0.082353621 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter)
Nov 28 18:27:59 compute-0 podman[253962]: 2025-11-28 18:27:59.045285192 +0000 UTC m=+0.090284705 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 28 18:27:59 compute-0 podman[253961]: 2025-11-28 18:27:59.051808691 +0000 UTC m=+0.104313378 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=f26160204c78771e78cdd2489258319b, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 28 18:27:59 compute-0 podman[203494]: time="2025-11-28T18:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:27:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:27:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4791 "" "Go-http-client/1.1"
Nov 28 18:28:01 compute-0 openstack_network_exporter[205632]: ERROR   18:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:28:01 compute-0 openstack_network_exporter[205632]: ERROR   18:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:28:01 compute-0 openstack_network_exporter[205632]: ERROR   18:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:28:01 compute-0 openstack_network_exporter[205632]: ERROR   18:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:28:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:28:01 compute-0 openstack_network_exporter[205632]: ERROR   18:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:28:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:28:02 compute-0 nova_compute[189296]: 2025-11-28 18:28:02.830 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:28:03 compute-0 nova_compute[189296]: 2025-11-28 18:28:03.900 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:28:06 compute-0 podman[254014]: 2025-11-28 18:28:06.044382769 +0000 UTC m=+0.090135251 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 28 18:28:06 compute-0 podman[254015]: 2025-11-28 18:28:06.049503824 +0000 UTC m=+0.087900017 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vcs-type=git, distribution-scope=public, name=ubi9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_id=edpm, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Nov 28 18:28:06 compute-0 podman[254021]: 2025-11-28 18:28:06.058732779 +0000 UTC m=+0.093682107 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:28:06 compute-0 podman[254013]: 2025-11-28 18:28:06.066867298 +0000 UTC m=+0.121040666 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 28 18:28:07 compute-0 nova_compute[189296]: 2025-11-28 18:28:07.838 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:28:08 compute-0 nova_compute[189296]: 2025-11-28 18:28:08.905 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:28:09 compute-0 podman[254092]: 2025-11-28 18:28:09.03671963 +0000 UTC m=+0.092400797 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 28 18:28:12 compute-0 nova_compute[189296]: 2025-11-28 18:28:12.844 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:28:13 compute-0 nova_compute[189296]: 2025-11-28 18:28:13.912 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:28:16 compute-0 podman[254119]: 2025-11-28 18:28:16.049157923 +0000 UTC m=+0.111087714 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 28 18:28:17 compute-0 nova_compute[189296]: 2025-11-28 18:28:17.848 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:28:18 compute-0 nova_compute[189296]: 2025-11-28 18:28:18.915 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:28:21 compute-0 nova_compute[189296]: 2025-11-28 18:28:21.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:28:21 compute-0 nova_compute[189296]: 2025-11-28 18:28:21.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:28:22 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 28 18:28:22 compute-0 nova_compute[189296]: 2025-11-28 18:28:22.852 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:28:23 compute-0 nova_compute[189296]: 2025-11-28 18:28:23.919 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:28:24 compute-0 nova_compute[189296]: 2025-11-28 18:28:24.636 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:28:25 compute-0 nova_compute[189296]: 2025-11-28 18:28:25.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:28:27 compute-0 nova_compute[189296]: 2025-11-28 18:28:27.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:28:27 compute-0 nova_compute[189296]: 2025-11-28 18:28:27.627 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:28:27 compute-0 nova_compute[189296]: 2025-11-28 18:28:27.857 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:28:28 compute-0 nova_compute[189296]: 2025-11-28 18:28:28.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:28:28 compute-0 nova_compute[189296]: 2025-11-28 18:28:28.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 28 18:28:28 compute-0 nova_compute[189296]: 2025-11-28 18:28:28.921 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:28:29 compute-0 nova_compute[189296]: 2025-11-28 18:28:29.641 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:28:29 compute-0 nova_compute[189296]: 2025-11-28 18:28:29.642 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:28:29 compute-0 nova_compute[189296]: 2025-11-28 18:28:29.642 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 18:28:29 compute-0 podman[203494]: time="2025-11-28T18:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:28:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:28:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4792 "" "Go-http-client/1.1"
Nov 28 18:28:30 compute-0 podman[254167]: 2025-11-28 18:28:30.039267549 +0000 UTC m=+0.077108883 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2)
Nov 28 18:28:30 compute-0 podman[254160]: 2025-11-28 18:28:30.045593443 +0000 UTC m=+0.105312142 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., config_id=edpm, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, release=1755695350)
Nov 28 18:28:30 compute-0 podman[254161]: 2025-11-28 18:28:30.051389664 +0000 UTC m=+0.102587585 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, tcib_build_tag=f26160204c78771e78cdd2489258319b, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:28:30 compute-0 nova_compute[189296]: 2025-11-28 18:28:30.474 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:28:30 compute-0 nova_compute[189296]: 2025-11-28 18:28:30.475 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:28:30 compute-0 nova_compute[189296]: 2025-11-28 18:28:30.475 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:28:30 compute-0 nova_compute[189296]: 2025-11-28 18:28:30.476 189300 DEBUG nova.objects.instance [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 200bd8bc-d121-4a86-b728-ea98aac95adf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:28:31 compute-0 openstack_network_exporter[205632]: ERROR   18:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:28:31 compute-0 openstack_network_exporter[205632]: ERROR   18:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:28:31 compute-0 openstack_network_exporter[205632]: ERROR   18:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:28:31 compute-0 openstack_network_exporter[205632]: ERROR   18:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:28:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:28:31 compute-0 openstack_network_exporter[205632]: ERROR   18:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:28:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:28:32 compute-0 nova_compute[189296]: 2025-11-28 18:28:32.499 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Updating instance_info_cache with network_info: [{"id": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "address": "fa:16:3e:c6:fd:79", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49c3cd00-3b", "ovs_interfaceid": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:28:32 compute-0 nova_compute[189296]: 2025-11-28 18:28:32.521 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:28:32 compute-0 nova_compute[189296]: 2025-11-28 18:28:32.521 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:28:32 compute-0 nova_compute[189296]: 2025-11-28 18:28:32.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:28:32 compute-0 nova_compute[189296]: 2025-11-28 18:28:32.659 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:28:32 compute-0 nova_compute[189296]: 2025-11-28 18:28:32.660 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:28:32 compute-0 nova_compute[189296]: 2025-11-28 18:28:32.660 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:28:32 compute-0 nova_compute[189296]: 2025-11-28 18:28:32.661 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:28:32 compute-0 nova_compute[189296]: 2025-11-28 18:28:32.731 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:28:32 compute-0 nova_compute[189296]: 2025-11-28 18:28:32.793 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:28:32 compute-0 nova_compute[189296]: 2025-11-28 18:28:32.795 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:28:32 compute-0 nova_compute[189296]: 2025-11-28 18:28:32.853 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:28:32 compute-0 nova_compute[189296]: 2025-11-28 18:28:32.860 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:28:32 compute-0 nova_compute[189296]: 2025-11-28 18:28:32.861 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:28:32 compute-0 nova_compute[189296]: 2025-11-28 18:28:32.920 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:28:32 compute-0 nova_compute[189296]: 2025-11-28 18:28:32.922 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:28:32 compute-0 nova_compute[189296]: 2025-11-28 18:28:32.986 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:28:33 compute-0 nova_compute[189296]: 2025-11-28 18:28:33.356 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:28:33 compute-0 nova_compute[189296]: 2025-11-28 18:28:33.357 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4974MB free_disk=72.24905014038086GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:28:33 compute-0 nova_compute[189296]: 2025-11-28 18:28:33.358 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:28:33 compute-0 nova_compute[189296]: 2025-11-28 18:28:33.358 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:28:33 compute-0 nova_compute[189296]: 2025-11-28 18:28:33.527 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 200bd8bc-d121-4a86-b728-ea98aac95adf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:28:33 compute-0 nova_compute[189296]: 2025-11-28 18:28:33.529 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance bf6c3ac0-6e00-4be5-ae3a-454d022268e8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:28:33 compute-0 nova_compute[189296]: 2025-11-28 18:28:33.530 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:28:33 compute-0 nova_compute[189296]: 2025-11-28 18:28:33.530 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:28:33 compute-0 nova_compute[189296]: 2025-11-28 18:28:33.689 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:28:33 compute-0 nova_compute[189296]: 2025-11-28 18:28:33.726 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:28:33 compute-0 nova_compute[189296]: 2025-11-28 18:28:33.728 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:28:33 compute-0 nova_compute[189296]: 2025-11-28 18:28:33.729 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:28:33 compute-0 nova_compute[189296]: 2025-11-28 18:28:33.923 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:28:34 compute-0 nova_compute[189296]: 2025-11-28 18:28:34.729 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:28:35 compute-0 nova_compute[189296]: 2025-11-28 18:28:35.621 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:28:36 compute-0 nova_compute[189296]: 2025-11-28 18:28:36.627 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:28:37 compute-0 podman[254227]: 2025-11-28 18:28:37.008904184 +0000 UTC m=+0.068831871 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 28 18:28:37 compute-0 podman[254228]: 2025-11-28 18:28:37.009715405 +0000 UTC m=+0.064928556 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 28 18:28:37 compute-0 podman[254229]: 2025-11-28 18:28:37.035640638 +0000 UTC m=+0.084627367 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, io.openshift.expose-services=, version=9.4, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.29.0, vcs-type=git, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 28 18:28:37 compute-0 podman[254235]: 2025-11-28 18:28:37.04928625 +0000 UTC m=+0.095661536 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 28 18:28:37 compute-0 nova_compute[189296]: 2025-11-28 18:28:37.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:28:37 compute-0 nova_compute[189296]: 2025-11-28 18:28:37.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:28:37 compute-0 nova_compute[189296]: 2025-11-28 18:28:37.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 28 18:28:37 compute-0 nova_compute[189296]: 2025-11-28 18:28:37.642 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 28 18:28:37 compute-0 nova_compute[189296]: 2025-11-28 18:28:37.864 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:28:38 compute-0 nova_compute[189296]: 2025-11-28 18:28:38.928 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:28:40 compute-0 podman[254307]: 2025-11-28 18:28:40.089372118 +0000 UTC m=+0.140890701 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:28:42 compute-0 nova_compute[189296]: 2025-11-28 18:28:42.868 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:28:43 compute-0 nova_compute[189296]: 2025-11-28 18:28:43.929 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:28:47 compute-0 podman[254333]: 2025-11-28 18:28:47.037326066 +0000 UTC m=+0.086528314 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 28 18:28:47 compute-0 nova_compute[189296]: 2025-11-28 18:28:47.875 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:28:48 compute-0 nova_compute[189296]: 2025-11-28 18:28:48.935 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:28:49 compute-0 nova_compute[189296]: 2025-11-28 18:28:49.993 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:28:50 compute-0 nova_compute[189296]: 2025-11-28 18:28:50.017 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Triggering sync for uuid 200bd8bc-d121-4a86-b728-ea98aac95adf _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 28 18:28:50 compute-0 nova_compute[189296]: 2025-11-28 18:28:50.018 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Triggering sync for uuid bf6c3ac0-6e00-4be5-ae3a-454d022268e8 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 28 18:28:50 compute-0 nova_compute[189296]: 2025-11-28 18:28:50.019 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "200bd8bc-d121-4a86-b728-ea98aac95adf" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:28:50 compute-0 nova_compute[189296]: 2025-11-28 18:28:50.019 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "200bd8bc-d121-4a86-b728-ea98aac95adf" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:28:50 compute-0 nova_compute[189296]: 2025-11-28 18:28:50.020 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:28:50 compute-0 nova_compute[189296]: 2025-11-28 18:28:50.020 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:28:50 compute-0 nova_compute[189296]: 2025-11-28 18:28:50.059 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "200bd8bc-d121-4a86-b728-ea98aac95adf" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:28:50 compute-0 nova_compute[189296]: 2025-11-28 18:28:50.062 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:28:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:51.991 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 28 18:28:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:51.994 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 28 18:28:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:51.994 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:28:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:51.996 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc143395760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:28:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:51.997 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:28:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:51.998 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:28:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:51.998 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:28:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:51.999 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.000 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.000 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.001 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.001 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.002 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.003 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.003 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bf6c3ac0-6e00-4be5-ae3a-454d022268e8', 'name': 'te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5', 'flavor': {'id': 'b177f611-8f79-4bfd-9a12-e83e9545757b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '7d5268e2-45b5-44b2-b3c1-3da9b27b258e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000010', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4c71a276f38f4bfebf1d3631d6f82966', 'user_id': 'c1f6c07dc6c5400cbf4fa724992b16d3', 'hostId': 'd63a60f107fb9172c58f42464c0d0697d316dd72980345b387d4da6d', 'status': 'active', 'metadata': {'metering.server_group': 'a12ef97f-9351-448f-95c7-ab90e2c7b098'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.003 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.005 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.005 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.006 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.007 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.007 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.008 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.008 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '200bd8bc-d121-4a86-b728-ea98aac95adf', 'name': 'te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw', 'flavor': {'id': 'b177f611-8f79-4bfd-9a12-e83e9545757b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '7d5268e2-45b5-44b2-b3c1-3da9b27b258e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4c71a276f38f4bfebf1d3631d6f82966', 'user_id': 'c1f6c07dc6c5400cbf4fa724992b16d3', 'hostId': 'd63a60f107fb9172c58f42464c0d0697d316dd72980345b387d4da6d', 'status': 'active', 'metadata': {'metering.server_group': 'a12ef97f-9351-448f-95c7-ab90e2c7b098'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.009 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.009 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.009 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.010 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.009 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{'inspect_disk_info': {}}], pollster history [{'disk.device.capacity': [<NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5>, <NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5>, <NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.011 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{'inspect_disk_info': {}}], pollster history [{'disk.device.capacity': [<NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5>, <NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5>, <NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-28T18:28:52.010169) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.011 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{'inspect_disk_info': {}}], pollster history [{'disk.device.capacity': [<NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5>, <NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5>, <NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.012 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{'inspect_disk_info': {}}], pollster history [{'disk.device.capacity': [<NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5>, <NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5>, <NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.013 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{'inspect_disk_info': {}}], pollster history [{'disk.device.capacity': [<NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5>, <NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5>, <NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.013 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{'inspect_disk_info': {}}], pollster history [{'disk.device.capacity': [<NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5>, <NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5>, <NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.014 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{'inspect_disk_info': {}}], pollster history [{'disk.device.capacity': [<NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5>, <NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5>, <NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.014 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f69cd0>] with cache [{'inspect_disk_info': {}}], pollster history [{'disk.device.capacity': [<NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5>, <NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5>, <NovaLikeServer: te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.034 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.035 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.055 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.056 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.056 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.056 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc1433970b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.056 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.057 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.057 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.057 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.058 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-28T18:28:52.057229) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.117 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.bytes volume: 28937216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.118 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.181 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.bytes volume: 30579200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.182 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.182 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.183 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc1433971d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.183 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.183 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.183 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.183 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.183 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.latency volume: 612357263 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.184 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.latency volume: 39977629 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.184 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-28T18:28:52.183543) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.184 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.latency volume: 597042360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.184 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.latency volume: 54497620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.185 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.185 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc143397c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.185 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.185 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.185 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.185 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.186 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-28T18:28:52.185865) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.191 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.194 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.195 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.195 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc143397620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.195 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.195 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.195 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.196 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-28T18:28:52.196003) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.227 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/memory.usage volume: 43.30078125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.266 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/memory.usage volume: 42.33984375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.267 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.267 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc143397260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.267 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.268 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.268 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.268 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.268 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.269 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.270 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-28T18:28:52.268579) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.270 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.271 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.272 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.272 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc143397290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.272 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.272 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.273 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.273 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.273 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.274 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.274 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-28T18:28:52.273340) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.275 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.275 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.276 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.276 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc143408290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.277 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.277 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.277 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.277 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.278 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-28T18:28:52.277786) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.278 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.279 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.279 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.280 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc1433972f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.280 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.280 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.280 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.281 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.281 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.latency volume: 2902152956 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.281 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.282 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.latency volume: 2414331628 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.282 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.283 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.283 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc144640f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.284 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.284 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.284 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.284 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.285 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.requests volume: 320 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.285 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.286 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.requests volume: 337 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.286 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.287 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.287 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc1433976b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.288 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.288 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.288 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.288 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.288 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.incoming.bytes.delta volume: 1360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.289 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.290 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.290 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc143397fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.290 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.291 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.291 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.291 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-28T18:28:52.280994) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.292 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-28T18:28:52.284905) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.292 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-28T18:28:52.288684) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.292 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-28T18:28:52.291810) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.291 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.292 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.292 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.293 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.294 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc14457db80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.294 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.294 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.294 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.295 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-28T18:28:52.294970) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.295 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.295 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/cpu volume: 150500000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.296 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/cpu volume: 332500000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.297 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.297 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc143397950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.297 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.297 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc143397380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.298 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.298 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.298 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.298 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.299 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.299 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-28T18:28:52.298680) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.299 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc143397bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.299 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.299 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.299 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.300 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.300 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-28T18:28:52.300045) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.300 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.300 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.301 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.301 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc1433973e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.301 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.301 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.301 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.301 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.302 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-28T18:28:52.301699) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.302 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.302 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc143397c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.302 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.302 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.302 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.303 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.303 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.303 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.303 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.304 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc143397ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.304 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-28T18:28:52.303049) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.304 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.304 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.304 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.304 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.304 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.305 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-28T18:28:52.304844) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.305 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.305 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.306 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc1460ad370>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.306 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.306 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.306 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.306 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.306 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.307 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-28T18:28:52.306555) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.307 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.307 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.307 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.308 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.308 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc143397d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.308 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.308 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.308 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.308 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.309 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.outgoing.bytes.delta volume: 1354 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.309 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.309 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.310 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc143397e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.310 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-28T18:28:52.308883) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.310 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.310 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc143397650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.310 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.311 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.311 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.311 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.311 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.311 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-28T18:28:52.311294) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.311 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.312 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.312 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc143397e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.312 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.312 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.312 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.312 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.313 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.313 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.313 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-28T18:28:52.312945) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.314 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.314 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc143397f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.314 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.314 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.314 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.314 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.314 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.315 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-28T18:28:52.314732) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.315 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.315 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.316 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc143397230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.316 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.316 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.316 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.316 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.316 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.requests volume: 1041 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.316 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-28T18:28:52.316531) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.317 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.317 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.requests volume: 1106 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.317 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.318 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.319 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.319 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.319 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.319 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.319 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.319 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.319 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.320 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.320 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.320 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.320 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.320 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.320 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.320 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.320 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.320 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.320 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.320 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.321 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.321 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.321 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.321 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.321 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.321 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.321 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:28:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:28:52.321 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:28:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:28:52.641 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:28:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:28:52.642 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:28:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:28:52.643 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:28:52 compute-0 nova_compute[189296]: 2025-11-28 18:28:52.880 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:28:53 compute-0 nova_compute[189296]: 2025-11-28 18:28:53.936 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:28:57 compute-0 nova_compute[189296]: 2025-11-28 18:28:57.883 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:28:58 compute-0 nova_compute[189296]: 2025-11-28 18:28:58.938 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:28:59 compute-0 podman[203494]: time="2025-11-28T18:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:28:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:28:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4799 "" "Go-http-client/1.1"
Nov 28 18:29:01 compute-0 podman[254359]: 2025-11-28 18:29:01.069761925 +0000 UTC m=+0.107254870 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, version=9.6, config_id=edpm, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, vendor=Red Hat, Inc.)
Nov 28 18:29:01 compute-0 podman[254361]: 2025-11-28 18:29:01.078840787 +0000 UTC m=+0.100322560 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 28 18:29:01 compute-0 podman[254360]: 2025-11-28 18:29:01.093549746 +0000 UTC m=+0.122551703 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:29:01 compute-0 openstack_network_exporter[205632]: ERROR   18:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:29:01 compute-0 openstack_network_exporter[205632]: ERROR   18:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:29:01 compute-0 openstack_network_exporter[205632]: ERROR   18:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:29:01 compute-0 openstack_network_exporter[205632]: ERROR   18:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:29:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:29:01 compute-0 openstack_network_exporter[205632]: ERROR   18:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:29:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:29:02 compute-0 nova_compute[189296]: 2025-11-28 18:29:02.886 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:29:03 compute-0 nova_compute[189296]: 2025-11-28 18:29:03.940 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:29:07 compute-0 nova_compute[189296]: 2025-11-28 18:29:07.890 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:29:08 compute-0 podman[254416]: 2025-11-28 18:29:08.017668292 +0000 UTC m=+0.067430997 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:29:08 compute-0 podman[254417]: 2025-11-28 18:29:08.056715395 +0000 UTC m=+0.093774980 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:29:08 compute-0 podman[254422]: 2025-11-28 18:29:08.060507688 +0000 UTC m=+0.090883630 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.4, io.openshift.tags=base rhel9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, name=ubi9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., release-0.7.12=)
Nov 28 18:29:08 compute-0 podman[254424]: 2025-11-28 18:29:08.075139085 +0000 UTC m=+0.102069832 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, tcib_managed=true)
Nov 28 18:29:08 compute-0 nova_compute[189296]: 2025-11-28 18:29:08.942 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:29:11 compute-0 podman[254490]: 2025-11-28 18:29:11.077429789 +0000 UTC m=+0.130816845 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:29:12 compute-0 nova_compute[189296]: 2025-11-28 18:29:12.896 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:29:13 compute-0 nova_compute[189296]: 2025-11-28 18:29:13.944 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:29:17 compute-0 nova_compute[189296]: 2025-11-28 18:29:17.901 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:29:18 compute-0 podman[254514]: 2025-11-28 18:29:18.025869438 +0000 UTC m=+0.073330171 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 28 18:29:18 compute-0 nova_compute[189296]: 2025-11-28 18:29:18.947 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:29:22 compute-0 nova_compute[189296]: 2025-11-28 18:29:22.649 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:29:22 compute-0 nova_compute[189296]: 2025-11-28 18:29:22.905 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:29:23 compute-0 nova_compute[189296]: 2025-11-28 18:29:23.949 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:29:25 compute-0 nova_compute[189296]: 2025-11-28 18:29:25.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:29:27 compute-0 nova_compute[189296]: 2025-11-28 18:29:27.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:29:27 compute-0 nova_compute[189296]: 2025-11-28 18:29:27.910 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:29:28 compute-0 nova_compute[189296]: 2025-11-28 18:29:28.951 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:29:29 compute-0 nova_compute[189296]: 2025-11-28 18:29:29.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:29:29 compute-0 nova_compute[189296]: 2025-11-28 18:29:29.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:29:29 compute-0 podman[203494]: time="2025-11-28T18:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:29:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:29:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4793 "" "Go-http-client/1.1"
Nov 28 18:29:31 compute-0 openstack_network_exporter[205632]: ERROR   18:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:29:31 compute-0 openstack_network_exporter[205632]: ERROR   18:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:29:31 compute-0 openstack_network_exporter[205632]: ERROR   18:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:29:31 compute-0 openstack_network_exporter[205632]: ERROR   18:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:29:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:29:31 compute-0 openstack_network_exporter[205632]: ERROR   18:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:29:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:29:31 compute-0 nova_compute[189296]: 2025-11-28 18:29:31.628 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:29:31 compute-0 nova_compute[189296]: 2025-11-28 18:29:31.629 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:29:32 compute-0 podman[254540]: 2025-11-28 18:29:32.032701932 +0000 UTC m=+0.089050964 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=f26160204c78771e78cdd2489258319b, org.label-schema.license=GPLv2)
Nov 28 18:29:32 compute-0 podman[254541]: 2025-11-28 18:29:32.046429098 +0000 UTC m=+0.098265260 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 28 18:29:32 compute-0 podman[254539]: 2025-11-28 18:29:32.050494097 +0000 UTC m=+0.101145300 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, architecture=x86_64, release=1755695350, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, version=9.6, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc.)
Nov 28 18:29:32 compute-0 nova_compute[189296]: 2025-11-28 18:29:32.512 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-bf6c3ac0-6e00-4be5-ae3a-454d022268e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:29:32 compute-0 nova_compute[189296]: 2025-11-28 18:29:32.513 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-bf6c3ac0-6e00-4be5-ae3a-454d022268e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:29:32 compute-0 nova_compute[189296]: 2025-11-28 18:29:32.513 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:29:32 compute-0 nova_compute[189296]: 2025-11-28 18:29:32.915 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:29:33 compute-0 nova_compute[189296]: 2025-11-28 18:29:33.954 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:29:34 compute-0 nova_compute[189296]: 2025-11-28 18:29:34.763 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Updating instance_info_cache with network_info: [{"id": "0a072d7e-c128-48b9-9754-327584bc3579", "address": "fa:16:3e:c4:e2:c9", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a072d7e-c1", "ovs_interfaceid": "0a072d7e-c128-48b9-9754-327584bc3579", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:29:34 compute-0 nova_compute[189296]: 2025-11-28 18:29:34.779 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-bf6c3ac0-6e00-4be5-ae3a-454d022268e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:29:34 compute-0 nova_compute[189296]: 2025-11-28 18:29:34.780 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:29:34 compute-0 nova_compute[189296]: 2025-11-28 18:29:34.781 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:29:34 compute-0 nova_compute[189296]: 2025-11-28 18:29:34.781 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:29:34 compute-0 nova_compute[189296]: 2025-11-28 18:29:34.805 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:29:34 compute-0 nova_compute[189296]: 2025-11-28 18:29:34.805 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:29:34 compute-0 nova_compute[189296]: 2025-11-28 18:29:34.806 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:29:34 compute-0 nova_compute[189296]: 2025-11-28 18:29:34.806 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:29:34 compute-0 nova_compute[189296]: 2025-11-28 18:29:34.887 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:29:34 compute-0 nova_compute[189296]: 2025-11-28 18:29:34.946 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:29:34 compute-0 nova_compute[189296]: 2025-11-28 18:29:34.947 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:29:35 compute-0 nova_compute[189296]: 2025-11-28 18:29:35.004 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:29:35 compute-0 nova_compute[189296]: 2025-11-28 18:29:35.012 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:29:35 compute-0 nova_compute[189296]: 2025-11-28 18:29:35.070 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:29:35 compute-0 nova_compute[189296]: 2025-11-28 18:29:35.072 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:29:35 compute-0 nova_compute[189296]: 2025-11-28 18:29:35.153 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:29:35 compute-0 nova_compute[189296]: 2025-11-28 18:29:35.494 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:29:35 compute-0 nova_compute[189296]: 2025-11-28 18:29:35.495 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4966MB free_disk=72.24916076660156GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:29:35 compute-0 nova_compute[189296]: 2025-11-28 18:29:35.496 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:29:35 compute-0 nova_compute[189296]: 2025-11-28 18:29:35.497 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:29:35 compute-0 nova_compute[189296]: 2025-11-28 18:29:35.584 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 200bd8bc-d121-4a86-b728-ea98aac95adf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:29:35 compute-0 nova_compute[189296]: 2025-11-28 18:29:35.585 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance bf6c3ac0-6e00-4be5-ae3a-454d022268e8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:29:35 compute-0 nova_compute[189296]: 2025-11-28 18:29:35.585 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:29:35 compute-0 nova_compute[189296]: 2025-11-28 18:29:35.585 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:29:35 compute-0 nova_compute[189296]: 2025-11-28 18:29:35.761 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:29:35 compute-0 nova_compute[189296]: 2025-11-28 18:29:35.786 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:29:35 compute-0 nova_compute[189296]: 2025-11-28 18:29:35.788 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:29:35 compute-0 nova_compute[189296]: 2025-11-28 18:29:35.789 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:29:37 compute-0 nova_compute[189296]: 2025-11-28 18:29:37.633 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:29:37 compute-0 nova_compute[189296]: 2025-11-28 18:29:37.919 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:29:38 compute-0 nova_compute[189296]: 2025-11-28 18:29:38.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:29:38 compute-0 nova_compute[189296]: 2025-11-28 18:29:38.958 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:29:39 compute-0 podman[254608]: 2025-11-28 18:29:39.004346429 +0000 UTC m=+0.063900141 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 18:29:39 compute-0 podman[254611]: 2025-11-28 18:29:39.043958166 +0000 UTC m=+0.088074681 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:29:39 compute-0 podman[254609]: 2025-11-28 18:29:39.04904487 +0000 UTC m=+0.102626046 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 28 18:29:39 compute-0 podman[254610]: 2025-11-28 18:29:39.062808456 +0000 UTC m=+0.105714401 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, name=ubi9, release-0.7.12=, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 28 18:29:42 compute-0 podman[254689]: 2025-11-28 18:29:42.049339495 +0000 UTC m=+0.106634934 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 28 18:29:42 compute-0 nova_compute[189296]: 2025-11-28 18:29:42.923 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:29:43 compute-0 nova_compute[189296]: 2025-11-28 18:29:43.958 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:29:47 compute-0 nova_compute[189296]: 2025-11-28 18:29:47.934 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:29:48 compute-0 nova_compute[189296]: 2025-11-28 18:29:48.963 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:29:49 compute-0 podman[254715]: 2025-11-28 18:29:49.018289297 +0000 UTC m=+0.080079187 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:29:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:29:52.643 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:29:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:29:52.644 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:29:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:29:52.645 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:29:52 compute-0 nova_compute[189296]: 2025-11-28 18:29:52.938 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:29:53 compute-0 nova_compute[189296]: 2025-11-28 18:29:53.963 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:29:57 compute-0 nova_compute[189296]: 2025-11-28 18:29:57.942 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:29:58 compute-0 nova_compute[189296]: 2025-11-28 18:29:58.965 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:29:59 compute-0 podman[203494]: time="2025-11-28T18:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:29:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:29:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4794 "" "Go-http-client/1.1"
Nov 28 18:30:01 compute-0 openstack_network_exporter[205632]: ERROR   18:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:30:01 compute-0 openstack_network_exporter[205632]: ERROR   18:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:30:01 compute-0 openstack_network_exporter[205632]: ERROR   18:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:30:01 compute-0 openstack_network_exporter[205632]: ERROR   18:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:30:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:30:01 compute-0 openstack_network_exporter[205632]: ERROR   18:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:30:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:30:02 compute-0 nova_compute[189296]: 2025-11-28 18:30:02.945 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:30:03 compute-0 podman[254740]: 2025-11-28 18:30:03.013542048 +0000 UTC m=+0.074751935 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_id=edpm, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.openshift.expose-services=)
Nov 28 18:30:03 compute-0 podman[254741]: 2025-11-28 18:30:03.016777417 +0000 UTC m=+0.065978662 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=f26160204c78771e78cdd2489258319b)
Nov 28 18:30:03 compute-0 podman[254742]: 2025-11-28 18:30:03.044995217 +0000 UTC m=+0.081792629 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Nov 28 18:30:03 compute-0 nova_compute[189296]: 2025-11-28 18:30:03.971 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:30:07 compute-0 nova_compute[189296]: 2025-11-28 18:30:07.949 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:30:08 compute-0 nova_compute[189296]: 2025-11-28 18:30:08.973 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:30:10 compute-0 podman[254797]: 2025-11-28 18:30:10.01862126 +0000 UTC m=+0.075246007 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 28 18:30:10 compute-0 podman[254796]: 2025-11-28 18:30:10.026474563 +0000 UTC m=+0.085297174 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 28 18:30:10 compute-0 podman[254799]: 2025-11-28 18:30:10.033358181 +0000 UTC m=+0.080982358 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:30:10 compute-0 podman[254798]: 2025-11-28 18:30:10.043485628 +0000 UTC m=+0.097566913 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.openshift.tags=base rhel9, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., release-0.7.12=, build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, com.redhat.component=ubi9-container, io.buildah.version=1.29.0)
Nov 28 18:30:12 compute-0 nova_compute[189296]: 2025-11-28 18:30:12.953 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:30:13 compute-0 podman[254871]: 2025-11-28 18:30:13.101925492 +0000 UTC m=+0.149154212 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:30:13 compute-0 nova_compute[189296]: 2025-11-28 18:30:13.975 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:30:17 compute-0 nova_compute[189296]: 2025-11-28 18:30:17.957 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:30:18 compute-0 nova_compute[189296]: 2025-11-28 18:30:18.977 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:30:19 compute-0 podman[254896]: 2025-11-28 18:30:19.524922715 +0000 UTC m=+0.079363338 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 28 18:30:22 compute-0 nova_compute[189296]: 2025-11-28 18:30:22.621 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:30:22 compute-0 nova_compute[189296]: 2025-11-28 18:30:22.961 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:30:23 compute-0 nova_compute[189296]: 2025-11-28 18:30:23.980 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:30:25 compute-0 nova_compute[189296]: 2025-11-28 18:30:25.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:30:27 compute-0 nova_compute[189296]: 2025-11-28 18:30:27.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:30:27 compute-0 nova_compute[189296]: 2025-11-28 18:30:27.965 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:30:28 compute-0 nova_compute[189296]: 2025-11-28 18:30:28.984 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:30:29 compute-0 podman[203494]: time="2025-11-28T18:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:30:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:30:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4794 "" "Go-http-client/1.1"
Nov 28 18:30:30 compute-0 nova_compute[189296]: 2025-11-28 18:30:30.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:30:30 compute-0 nova_compute[189296]: 2025-11-28 18:30:30.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:30:31 compute-0 openstack_network_exporter[205632]: ERROR   18:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:30:31 compute-0 openstack_network_exporter[205632]: ERROR   18:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:30:31 compute-0 openstack_network_exporter[205632]: ERROR   18:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:30:31 compute-0 openstack_network_exporter[205632]: ERROR   18:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:30:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:30:31 compute-0 openstack_network_exporter[205632]: ERROR   18:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:30:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:30:32 compute-0 nova_compute[189296]: 2025-11-28 18:30:32.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:30:32 compute-0 nova_compute[189296]: 2025-11-28 18:30:32.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:30:32 compute-0 nova_compute[189296]: 2025-11-28 18:30:32.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 18:30:32 compute-0 nova_compute[189296]: 2025-11-28 18:30:32.969 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:30:33 compute-0 nova_compute[189296]: 2025-11-28 18:30:33.266 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:30:33 compute-0 nova_compute[189296]: 2025-11-28 18:30:33.267 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:30:33 compute-0 nova_compute[189296]: 2025-11-28 18:30:33.267 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:30:33 compute-0 nova_compute[189296]: 2025-11-28 18:30:33.267 189300 DEBUG nova.objects.instance [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 200bd8bc-d121-4a86-b728-ea98aac95adf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:30:33 compute-0 nova_compute[189296]: 2025-11-28 18:30:33.986 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:30:34 compute-0 podman[254919]: 2025-11-28 18:30:34.039564706 +0000 UTC m=+0.085238531 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=minimal rhel9, config_id=edpm, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.buildah.version=1.33.7)
Nov 28 18:30:34 compute-0 podman[254920]: 2025-11-28 18:30:34.059095483 +0000 UTC m=+0.108034978 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=f26160204c78771e78cdd2489258319b, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 28 18:30:34 compute-0 podman[254921]: 2025-11-28 18:30:34.066968976 +0000 UTC m=+0.107913386 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:30:34 compute-0 nova_compute[189296]: 2025-11-28 18:30:34.307 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Updating instance_info_cache with network_info: [{"id": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "address": "fa:16:3e:c6:fd:79", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49c3cd00-3b", "ovs_interfaceid": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:30:34 compute-0 nova_compute[189296]: 2025-11-28 18:30:34.331 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:30:34 compute-0 nova_compute[189296]: 2025-11-28 18:30:34.331 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:30:34 compute-0 nova_compute[189296]: 2025-11-28 18:30:34.332 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:30:34 compute-0 nova_compute[189296]: 2025-11-28 18:30:34.363 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:30:34 compute-0 nova_compute[189296]: 2025-11-28 18:30:34.363 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:30:34 compute-0 nova_compute[189296]: 2025-11-28 18:30:34.363 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:30:34 compute-0 nova_compute[189296]: 2025-11-28 18:30:34.364 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:30:34 compute-0 nova_compute[189296]: 2025-11-28 18:30:34.452 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:30:34 compute-0 nova_compute[189296]: 2025-11-28 18:30:34.512 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:30:34 compute-0 nova_compute[189296]: 2025-11-28 18:30:34.513 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:30:34 compute-0 nova_compute[189296]: 2025-11-28 18:30:34.576 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:30:34 compute-0 nova_compute[189296]: 2025-11-28 18:30:34.591 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:30:34 compute-0 nova_compute[189296]: 2025-11-28 18:30:34.650 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:30:34 compute-0 nova_compute[189296]: 2025-11-28 18:30:34.650 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:30:34 compute-0 nova_compute[189296]: 2025-11-28 18:30:34.732 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:30:35 compute-0 nova_compute[189296]: 2025-11-28 18:30:35.096 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:30:35 compute-0 nova_compute[189296]: 2025-11-28 18:30:35.097 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4973MB free_disk=72.24916076660156GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:30:35 compute-0 nova_compute[189296]: 2025-11-28 18:30:35.097 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:30:35 compute-0 nova_compute[189296]: 2025-11-28 18:30:35.098 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:30:35 compute-0 nova_compute[189296]: 2025-11-28 18:30:35.183 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 200bd8bc-d121-4a86-b728-ea98aac95adf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:30:35 compute-0 nova_compute[189296]: 2025-11-28 18:30:35.184 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance bf6c3ac0-6e00-4be5-ae3a-454d022268e8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:30:35 compute-0 nova_compute[189296]: 2025-11-28 18:30:35.185 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:30:35 compute-0 nova_compute[189296]: 2025-11-28 18:30:35.186 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:30:35 compute-0 nova_compute[189296]: 2025-11-28 18:30:35.255 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:30:35 compute-0 nova_compute[189296]: 2025-11-28 18:30:35.274 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:30:35 compute-0 nova_compute[189296]: 2025-11-28 18:30:35.277 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:30:35 compute-0 nova_compute[189296]: 2025-11-28 18:30:35.277 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:30:36 compute-0 nova_compute[189296]: 2025-11-28 18:30:36.572 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:30:36 compute-0 nova_compute[189296]: 2025-11-28 18:30:36.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:30:37 compute-0 nova_compute[189296]: 2025-11-28 18:30:37.974 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:30:38 compute-0 nova_compute[189296]: 2025-11-28 18:30:38.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:30:38 compute-0 nova_compute[189296]: 2025-11-28 18:30:38.987 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:30:40 compute-0 nova_compute[189296]: 2025-11-28 18:30:40.622 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:30:41 compute-0 podman[254988]: 2025-11-28 18:30:41.020876359 +0000 UTC m=+0.081999443 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 18:30:41 compute-0 podman[254997]: 2025-11-28 18:30:41.042387514 +0000 UTC m=+0.088783458 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 28 18:30:41 compute-0 podman[254990]: 2025-11-28 18:30:41.048241977 +0000 UTC m=+0.097101371 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, managed_by=edpm_ansible, version=9.4, build-date=2024-09-18T21:23:30, name=ubi9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, architecture=x86_64, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0)
Nov 28 18:30:41 compute-0 podman[254989]: 2025-11-28 18:30:41.054450869 +0000 UTC m=+0.109276580 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 28 18:30:42 compute-0 nova_compute[189296]: 2025-11-28 18:30:42.977 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:30:43 compute-0 nova_compute[189296]: 2025-11-28 18:30:43.989 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:30:44 compute-0 podman[255066]: 2025-11-28 18:30:44.092237019 +0000 UTC m=+0.134152696 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:30:47 compute-0 nova_compute[189296]: 2025-11-28 18:30:47.980 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:30:48 compute-0 nova_compute[189296]: 2025-11-28 18:30:48.993 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:30:50 compute-0 podman[255092]: 2025-11-28 18:30:50.003912819 +0000 UTC m=+0.067153541 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.991 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.991 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.992 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.992 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc143395760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.993 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.993 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.993 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.993 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.993 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.993 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.993 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.994 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.994 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.994 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.994 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.994 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.994 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.994 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.994 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.994 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.994 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.995 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.995 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.995 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.995 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.995 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.995 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.995 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.995 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:30:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:51.999 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bf6c3ac0-6e00-4be5-ae3a-454d022268e8', 'name': 'te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5', 'flavor': {'id': 'b177f611-8f79-4bfd-9a12-e83e9545757b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '7d5268e2-45b5-44b2-b3c1-3da9b27b258e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000010', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4c71a276f38f4bfebf1d3631d6f82966', 'user_id': 'c1f6c07dc6c5400cbf4fa724992b16d3', 'hostId': 'd63a60f107fb9172c58f42464c0d0697d316dd72980345b387d4da6d', 'status': 'active', 'metadata': {'metering.server_group': 'a12ef97f-9351-448f-95c7-ab90e2c7b098'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.003 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '200bd8bc-d121-4a86-b728-ea98aac95adf', 'name': 'te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw', 'flavor': {'id': 'b177f611-8f79-4bfd-9a12-e83e9545757b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '7d5268e2-45b5-44b2-b3c1-3da9b27b258e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4c71a276f38f4bfebf1d3631d6f82966', 'user_id': 'c1f6c07dc6c5400cbf4fa724992b16d3', 'hostId': 'd63a60f107fb9172c58f42464c0d0697d316dd72980345b387d4da6d', 'status': 'active', 'metadata': {'metering.server_group': 'a12ef97f-9351-448f-95c7-ab90e2c7b098'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.003 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.003 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.003 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.004 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.005 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-28T18:30:52.004095) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.025 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.026 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.043 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.044 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.044 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.045 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc1433970b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.045 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.045 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.045 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.046 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.046 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-28T18:30:52.045908) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.101 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.bytes volume: 28937216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.101 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.141 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.bytes volume: 30579200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.142 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.142 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.143 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc1433971d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.143 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.143 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.144 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.144 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.144 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.latency volume: 612357263 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.145 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.latency volume: 39977629 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.144 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-28T18:30:52.144228) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.145 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.latency volume: 597042360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.145 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.latency volume: 54497620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.146 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.146 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc143397c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.146 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.147 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.147 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.147 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.148 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-28T18:30:52.147541) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.151 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.155 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.155 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.156 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc143397620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.156 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.156 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.156 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.157 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.157 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-28T18:30:52.156911) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.186 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/memory.usage volume: 43.0234375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.208 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/memory.usage volume: 42.33984375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.208 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.209 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc143397260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.209 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.209 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.209 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.210 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.210 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.210 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-28T18:30:52.209936) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.210 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.211 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.211 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.212 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.212 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc143397290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.212 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.212 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.212 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.227 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.227 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.227 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.227 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-28T18:30:52.227272) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.228 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.228 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.228 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.228 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc143408290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.228 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.228 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.228 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.229 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.229 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.229 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.229 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-28T18:30:52.228932) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.229 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.229 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc1433972f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.229 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.229 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.230 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.230 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.230 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.latency volume: 2902152956 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.230 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.230 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-28T18:30:52.230077) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.230 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.latency volume: 2414331628 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.231 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.231 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.231 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc144640f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.231 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.231 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.231 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.232 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.232 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.requests volume: 320 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.232 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.232 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-28T18:30:52.231665) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.232 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.requests volume: 337 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.233 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.233 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.233 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc1433976b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.233 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.233 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.233 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.234 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.234 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.234 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-28T18:30:52.233688) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.234 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.235 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.235 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc143397fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.235 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.235 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.235 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.235 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.236 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.236 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-28T18:30:52.235654) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.236 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.237 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.237 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc14457db80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.237 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.237 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.237 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.237 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.238 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/cpu volume: 270120000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.238 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-28T18:30:52.237561) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.238 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/cpu volume: 333980000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.238 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.239 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc143397950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.239 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.239 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc143397380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.239 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.239 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.239 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.239 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.240 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-28T18:30:52.239521) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.240 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.240 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc143397bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.240 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.240 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.241 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.241 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.241 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.242 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.242 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.243 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc1433973e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.243 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.243 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.243 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.243 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-28T18:30:52.241100) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.243 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.244 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-28T18:30:52.243523) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.245 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.245 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc143397c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.245 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.245 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.245 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.245 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.246 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.246 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.247 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.247 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc143397ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.247 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.247 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.247 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.248 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.248 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.248 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.248 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-28T18:30:52.245809) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.249 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.249 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc1460ad370>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.249 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.249 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.250 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.250 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.250 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.250 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.251 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.251 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.251 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.251 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc143397d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.251 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.251 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.251 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.250 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-28T18:30:52.247736) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.252 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-28T18:30:52.250138) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.252 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.252 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.253 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.253 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.253 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc143397e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.254 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.254 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc143397650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.254 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.254 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.254 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.253 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-28T18:30:52.251955) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.254 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.255 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.255 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.256 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.256 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc143397e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.256 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.256 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.256 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.256 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.256 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.257 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.257 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.257 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc143397f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.257 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.257 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.257 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.256 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-28T18:30:52.254373) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.258 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.258 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.258 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.258 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.258 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc143397230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.259 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.259 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.259 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.259 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.259 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.requests volume: 1041 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.259 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.260 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.requests volume: 1106 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.260 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.260 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.259 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-28T18:30:52.256420) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.261 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-28T18:30:52.257763) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.262 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-28T18:30:52.259253) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.262 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.262 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.262 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.262 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.262 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.263 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:30:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:30:52.264 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:30:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:30:52.644 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:30:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:30:52.645 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:30:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:30:52.645 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:30:52 compute-0 nova_compute[189296]: 2025-11-28 18:30:52.982 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:30:53 compute-0 nova_compute[189296]: 2025-11-28 18:30:53.995 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:30:57 compute-0 nova_compute[189296]: 2025-11-28 18:30:57.985 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:30:58 compute-0 nova_compute[189296]: 2025-11-28 18:30:58.999 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:30:59 compute-0 podman[203494]: time="2025-11-28T18:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:30:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:30:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4797 "" "Go-http-client/1.1"
Nov 28 18:31:01 compute-0 openstack_network_exporter[205632]: ERROR   18:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:31:01 compute-0 openstack_network_exporter[205632]: ERROR   18:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:31:01 compute-0 openstack_network_exporter[205632]: ERROR   18:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:31:01 compute-0 openstack_network_exporter[205632]: ERROR   18:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:31:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:31:01 compute-0 openstack_network_exporter[205632]: ERROR   18:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:31:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:31:02 compute-0 nova_compute[189296]: 2025-11-28 18:31:02.988 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:31:04 compute-0 nova_compute[189296]: 2025-11-28 18:31:04.003 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:31:05 compute-0 podman[255117]: 2025-11-28 18:31:05.046331365 +0000 UTC m=+0.092801366 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.expose-services=, distribution-scope=public)
Nov 28 18:31:05 compute-0 podman[255118]: 2025-11-28 18:31:05.0576054 +0000 UTC m=+0.107885665 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, 
managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:31:05 compute-0 podman[255119]: 2025-11-28 18:31:05.063135535 +0000 UTC m=+0.101697924 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:31:07 compute-0 nova_compute[189296]: 2025-11-28 18:31:07.991 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:31:09 compute-0 nova_compute[189296]: 2025-11-28 18:31:09.006 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:31:12 compute-0 podman[255172]: 2025-11-28 18:31:12.026757605 +0000 UTC m=+0.079668126 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:31:12 compute-0 podman[255179]: 2025-11-28 18:31:12.045678628 +0000 UTC m=+0.081431819 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 28 18:31:12 compute-0 podman[255173]: 2025-11-28 18:31:12.05480345 +0000 UTC m=+0.100143935 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Nov 28 18:31:12 compute-0 podman[255174]: 2025-11-28 18:31:12.057092516 +0000 UTC m=+0.092389056 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, distribution-scope=public, vcs-type=git, container_name=kepler, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, config_id=edpm, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.buildah.version=1.29.0)
Nov 28 18:31:12 compute-0 nova_compute[189296]: 2025-11-28 18:31:12.995 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:31:14 compute-0 nova_compute[189296]: 2025-11-28 18:31:14.008 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:31:14 compute-0 podman[255247]: 2025-11-28 18:31:14.778474863 +0000 UTC m=+0.107464725 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:31:17 compute-0 nova_compute[189296]: 2025-11-28 18:31:17.998 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:31:19 compute-0 nova_compute[189296]: 2025-11-28 18:31:19.010 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:31:21 compute-0 podman[255271]: 2025-11-28 18:31:21.006580177 +0000 UTC m=+0.073291380 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 28 18:31:23 compute-0 nova_compute[189296]: 2025-11-28 18:31:23.000 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:31:24 compute-0 nova_compute[189296]: 2025-11-28 18:31:24.013 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:31:24 compute-0 nova_compute[189296]: 2025-11-28 18:31:24.647 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:31:25 compute-0 nova_compute[189296]: 2025-11-28 18:31:25.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:31:28 compute-0 nova_compute[189296]: 2025-11-28 18:31:28.003 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:31:28 compute-0 nova_compute[189296]: 2025-11-28 18:31:28.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:31:29 compute-0 nova_compute[189296]: 2025-11-28 18:31:29.016 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:31:29 compute-0 podman[203494]: time="2025-11-28T18:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:31:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:31:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4794 "" "Go-http-client/1.1"
Nov 28 18:31:31 compute-0 openstack_network_exporter[205632]: ERROR   18:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:31:31 compute-0 openstack_network_exporter[205632]: ERROR   18:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:31:31 compute-0 openstack_network_exporter[205632]: ERROR   18:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:31:31 compute-0 openstack_network_exporter[205632]: ERROR   18:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:31:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:31:31 compute-0 openstack_network_exporter[205632]: ERROR   18:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:31:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:31:31 compute-0 nova_compute[189296]: 2025-11-28 18:31:31.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:31:31 compute-0 nova_compute[189296]: 2025-11-28 18:31:31.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:31:32 compute-0 nova_compute[189296]: 2025-11-28 18:31:32.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:31:32 compute-0 nova_compute[189296]: 2025-11-28 18:31:32.627 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:31:33 compute-0 nova_compute[189296]: 2025-11-28 18:31:33.008 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:31:33 compute-0 nova_compute[189296]: 2025-11-28 18:31:33.262 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-bf6c3ac0-6e00-4be5-ae3a-454d022268e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:31:33 compute-0 nova_compute[189296]: 2025-11-28 18:31:33.265 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-bf6c3ac0-6e00-4be5-ae3a-454d022268e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:31:33 compute-0 nova_compute[189296]: 2025-11-28 18:31:33.265 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:31:34 compute-0 nova_compute[189296]: 2025-11-28 18:31:34.023 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:31:34 compute-0 nova_compute[189296]: 2025-11-28 18:31:34.191 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Updating instance_info_cache with network_info: [{"id": "0a072d7e-c128-48b9-9754-327584bc3579", "address": "fa:16:3e:c4:e2:c9", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a072d7e-c1", "ovs_interfaceid": "0a072d7e-c128-48b9-9754-327584bc3579", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:31:34 compute-0 nova_compute[189296]: 2025-11-28 18:31:34.213 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-bf6c3ac0-6e00-4be5-ae3a-454d022268e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:31:34 compute-0 nova_compute[189296]: 2025-11-28 18:31:34.214 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:31:34 compute-0 nova_compute[189296]: 2025-11-28 18:31:34.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:31:34 compute-0 nova_compute[189296]: 2025-11-28 18:31:34.672 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:31:34 compute-0 nova_compute[189296]: 2025-11-28 18:31:34.672 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:31:34 compute-0 nova_compute[189296]: 2025-11-28 18:31:34.673 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:31:34 compute-0 nova_compute[189296]: 2025-11-28 18:31:34.673 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:31:34 compute-0 nova_compute[189296]: 2025-11-28 18:31:34.809 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:31:34 compute-0 nova_compute[189296]: 2025-11-28 18:31:34.870 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:31:34 compute-0 nova_compute[189296]: 2025-11-28 18:31:34.872 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:31:34 compute-0 nova_compute[189296]: 2025-11-28 18:31:34.929 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:31:34 compute-0 nova_compute[189296]: 2025-11-28 18:31:34.941 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:31:34 compute-0 nova_compute[189296]: 2025-11-28 18:31:34.998 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:31:34 compute-0 nova_compute[189296]: 2025-11-28 18:31:34.999 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:31:35 compute-0 nova_compute[189296]: 2025-11-28 18:31:35.054 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:31:35 compute-0 nova_compute[189296]: 2025-11-28 18:31:35.351 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:31:35 compute-0 nova_compute[189296]: 2025-11-28 18:31:35.356 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4992MB free_disk=72.24921798706055GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:31:35 compute-0 nova_compute[189296]: 2025-11-28 18:31:35.356 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:31:35 compute-0 nova_compute[189296]: 2025-11-28 18:31:35.357 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:31:35 compute-0 nova_compute[189296]: 2025-11-28 18:31:35.502 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 200bd8bc-d121-4a86-b728-ea98aac95adf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:31:35 compute-0 nova_compute[189296]: 2025-11-28 18:31:35.502 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance bf6c3ac0-6e00-4be5-ae3a-454d022268e8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:31:35 compute-0 nova_compute[189296]: 2025-11-28 18:31:35.503 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:31:35 compute-0 nova_compute[189296]: 2025-11-28 18:31:35.503 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:31:35 compute-0 nova_compute[189296]: 2025-11-28 18:31:35.521 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing inventories for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 28 18:31:35 compute-0 nova_compute[189296]: 2025-11-28 18:31:35.546 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Updating ProviderTree inventory for provider d10a9930-4504-4222-97f7-6727a5a2d43b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 28 18:31:35 compute-0 nova_compute[189296]: 2025-11-28 18:31:35.546 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Updating inventory in ProviderTree for provider d10a9930-4504-4222-97f7-6727a5a2d43b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 28 18:31:35 compute-0 nova_compute[189296]: 2025-11-28 18:31:35.567 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing aggregate associations for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 28 18:31:35 compute-0 nova_compute[189296]: 2025-11-28 18:31:35.587 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing trait associations for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b, traits: HW_CPU_X86_ABM,COMPUTE_NODE,HW_CPU_X86_SVM,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SATA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 28 18:31:35 compute-0 nova_compute[189296]: 2025-11-28 18:31:35.675 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:31:35 compute-0 nova_compute[189296]: 2025-11-28 18:31:35.749 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:31:35 compute-0 nova_compute[189296]: 2025-11-28 18:31:35.752 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:31:35 compute-0 nova_compute[189296]: 2025-11-28 18:31:35.752 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:31:36 compute-0 podman[255307]: 2025-11-28 18:31:36.016686543 +0000 UTC m=+0.078360374 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.tags=minimal rhel9, distribution-scope=public, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 28 18:31:36 compute-0 podman[255308]: 2025-11-28 18:31:36.060583266 +0000 UTC m=+0.109073694 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 28 18:31:36 compute-0 podman[255309]: 2025-11-28 18:31:36.067305429 +0000 UTC m=+0.104875350 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 28 18:31:37 compute-0 nova_compute[189296]: 2025-11-28 18:31:37.753 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:31:38 compute-0 nova_compute[189296]: 2025-11-28 18:31:38.013 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:31:38 compute-0 nova_compute[189296]: 2025-11-28 18:31:38.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:31:39 compute-0 nova_compute[189296]: 2025-11-28 18:31:39.027 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:31:39 compute-0 nova_compute[189296]: 2025-11-28 18:31:39.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:31:43 compute-0 nova_compute[189296]: 2025-11-28 18:31:43.018 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:31:43 compute-0 podman[255365]: 2025-11-28 18:31:43.019506512 +0000 UTC m=+0.081955293 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 28 18:31:43 compute-0 podman[255367]: 2025-11-28 18:31:43.046522421 +0000 UTC m=+0.099299605 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 28 18:31:43 compute-0 podman[255373]: 2025-11-28 18:31:43.061383724 +0000 UTC m=+0.099919121 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible)
Nov 28 18:31:43 compute-0 podman[255366]: 2025-11-28 18:31:43.0677832 +0000 UTC m=+0.117079389 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 28 18:31:44 compute-0 nova_compute[189296]: 2025-11-28 18:31:44.029 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:31:45 compute-0 podman[255436]: 2025-11-28 18:31:45.042461277 +0000 UTC m=+0.109263188 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:31:48 compute-0 nova_compute[189296]: 2025-11-28 18:31:48.021 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:31:49 compute-0 nova_compute[189296]: 2025-11-28 18:31:49.036 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:31:52 compute-0 podman[255463]: 2025-11-28 18:31:52.016234575 +0000 UTC m=+0.074271194 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:31:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:31:52.646 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:31:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:31:52.647 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:31:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:31:52.648 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:31:53 compute-0 nova_compute[189296]: 2025-11-28 18:31:53.023 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:31:54 compute-0 nova_compute[189296]: 2025-11-28 18:31:54.038 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:31:58 compute-0 nova_compute[189296]: 2025-11-28 18:31:58.027 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:31:59 compute-0 nova_compute[189296]: 2025-11-28 18:31:59.040 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:31:59 compute-0 podman[203494]: time="2025-11-28T18:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:31:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:31:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4793 "" "Go-http-client/1.1"
Nov 28 18:32:01 compute-0 openstack_network_exporter[205632]: ERROR   18:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:32:01 compute-0 openstack_network_exporter[205632]: ERROR   18:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:32:01 compute-0 openstack_network_exporter[205632]: ERROR   18:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:32:01 compute-0 openstack_network_exporter[205632]: ERROR   18:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:32:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:32:01 compute-0 openstack_network_exporter[205632]: ERROR   18:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:32:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:32:03 compute-0 nova_compute[189296]: 2025-11-28 18:32:03.032 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:32:04 compute-0 nova_compute[189296]: 2025-11-28 18:32:04.043 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:32:07 compute-0 podman[255491]: 2025-11-28 18:32:07.046506942 +0000 UTC m=+0.080409223 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 28 18:32:07 compute-0 podman[255490]: 2025-11-28 18:32:07.056272451 +0000 UTC m=+0.087747153 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=f26160204c78771e78cdd2489258319b, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute)
Nov 28 18:32:07 compute-0 podman[255489]: 2025-11-28 18:32:07.085021003 +0000 UTC m=+0.122149643 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, vcs-type=git, version=9.6, architecture=x86_64, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal)
Nov 28 18:32:08 compute-0 nova_compute[189296]: 2025-11-28 18:32:08.035 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:32:09 compute-0 nova_compute[189296]: 2025-11-28 18:32:09.046 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:32:13 compute-0 nova_compute[189296]: 2025-11-28 18:32:13.040 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:32:14 compute-0 podman[255548]: 2025-11-28 18:32:14.039704888 +0000 UTC m=+0.078341144 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:32:14 compute-0 nova_compute[189296]: 2025-11-28 18:32:14.050 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:32:14 compute-0 podman[255547]: 2025-11-28 18:32:14.051656339 +0000 UTC m=+0.098005694 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 18:32:14 compute-0 podman[255550]: 2025-11-28 18:32:14.067853855 +0000 UTC m=+0.102401381 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:32:14 compute-0 podman[255549]: 2025-11-28 18:32:14.078672789 +0000 UTC m=+0.104813160 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, managed_by=edpm_ansible, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 28 18:32:16 compute-0 podman[255624]: 2025-11-28 18:32:16.075575135 +0000 UTC m=+0.136041702 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 28 18:32:18 compute-0 nova_compute[189296]: 2025-11-28 18:32:18.043 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:32:19 compute-0 nova_compute[189296]: 2025-11-28 18:32:19.051 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:32:23 compute-0 podman[255651]: 2025-11-28 18:32:23.026188122 +0000 UTC m=+0.080960087 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 28 18:32:23 compute-0 nova_compute[189296]: 2025-11-28 18:32:23.046 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:32:24 compute-0 nova_compute[189296]: 2025-11-28 18:32:24.055 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:32:26 compute-0 nova_compute[189296]: 2025-11-28 18:32:26.622 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:32:26 compute-0 nova_compute[189296]: 2025-11-28 18:32:26.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:32:28 compute-0 nova_compute[189296]: 2025-11-28 18:32:28.049 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:32:29 compute-0 nova_compute[189296]: 2025-11-28 18:32:29.057 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:32:29 compute-0 podman[203494]: time="2025-11-28T18:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:32:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:32:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4793 "" "Go-http-client/1.1"
Nov 28 18:32:30 compute-0 nova_compute[189296]: 2025-11-28 18:32:30.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:32:31 compute-0 openstack_network_exporter[205632]: ERROR   18:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:32:31 compute-0 openstack_network_exporter[205632]: ERROR   18:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:32:31 compute-0 openstack_network_exporter[205632]: ERROR   18:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:32:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:32:31 compute-0 openstack_network_exporter[205632]: ERROR   18:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:32:31 compute-0 openstack_network_exporter[205632]: ERROR   18:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:32:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:32:31 compute-0 nova_compute[189296]: 2025-11-28 18:32:31.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:32:31 compute-0 nova_compute[189296]: 2025-11-28 18:32:31.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:32:32 compute-0 nova_compute[189296]: 2025-11-28 18:32:32.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:32:32 compute-0 nova_compute[189296]: 2025-11-28 18:32:32.627 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:32:32 compute-0 nova_compute[189296]: 2025-11-28 18:32:32.627 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 18:32:33 compute-0 nova_compute[189296]: 2025-11-28 18:32:33.052 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:32:33 compute-0 nova_compute[189296]: 2025-11-28 18:32:33.252 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:32:33 compute-0 nova_compute[189296]: 2025-11-28 18:32:33.253 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:32:33 compute-0 nova_compute[189296]: 2025-11-28 18:32:33.254 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:32:33 compute-0 nova_compute[189296]: 2025-11-28 18:32:33.255 189300 DEBUG nova.objects.instance [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 200bd8bc-d121-4a86-b728-ea98aac95adf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:32:34 compute-0 nova_compute[189296]: 2025-11-28 18:32:34.060 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:32:34 compute-0 nova_compute[189296]: 2025-11-28 18:32:34.359 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Updating instance_info_cache with network_info: [{"id": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "address": "fa:16:3e:c6:fd:79", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49c3cd00-3b", "ovs_interfaceid": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:32:34 compute-0 nova_compute[189296]: 2025-11-28 18:32:34.376 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:32:34 compute-0 nova_compute[189296]: 2025-11-28 18:32:34.377 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:32:36 compute-0 nova_compute[189296]: 2025-11-28 18:32:36.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:32:36 compute-0 nova_compute[189296]: 2025-11-28 18:32:36.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:32:36 compute-0 nova_compute[189296]: 2025-11-28 18:32:36.653 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:32:36 compute-0 nova_compute[189296]: 2025-11-28 18:32:36.654 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:32:36 compute-0 nova_compute[189296]: 2025-11-28 18:32:36.655 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:32:36 compute-0 nova_compute[189296]: 2025-11-28 18:32:36.656 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:32:36 compute-0 nova_compute[189296]: 2025-11-28 18:32:36.769 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:32:36 compute-0 nova_compute[189296]: 2025-11-28 18:32:36.865 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:32:36 compute-0 nova_compute[189296]: 2025-11-28 18:32:36.866 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:32:36 compute-0 nova_compute[189296]: 2025-11-28 18:32:36.923 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:32:36 compute-0 nova_compute[189296]: 2025-11-28 18:32:36.931 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:32:37 compute-0 nova_compute[189296]: 2025-11-28 18:32:37.013 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:32:37 compute-0 nova_compute[189296]: 2025-11-28 18:32:37.014 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:32:37 compute-0 nova_compute[189296]: 2025-11-28 18:32:37.111 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:32:37 compute-0 nova_compute[189296]: 2025-11-28 18:32:37.519 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:32:37 compute-0 nova_compute[189296]: 2025-11-28 18:32:37.520 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4986MB free_disk=72.24916076660156GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:32:37 compute-0 nova_compute[189296]: 2025-11-28 18:32:37.521 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:32:37 compute-0 nova_compute[189296]: 2025-11-28 18:32:37.521 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:32:37 compute-0 nova_compute[189296]: 2025-11-28 18:32:37.590 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 200bd8bc-d121-4a86-b728-ea98aac95adf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:32:37 compute-0 nova_compute[189296]: 2025-11-28 18:32:37.591 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance bf6c3ac0-6e00-4be5-ae3a-454d022268e8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:32:37 compute-0 nova_compute[189296]: 2025-11-28 18:32:37.591 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:32:37 compute-0 nova_compute[189296]: 2025-11-28 18:32:37.591 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:32:37 compute-0 nova_compute[189296]: 2025-11-28 18:32:37.650 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:32:37 compute-0 nova_compute[189296]: 2025-11-28 18:32:37.662 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:32:37 compute-0 nova_compute[189296]: 2025-11-28 18:32:37.664 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:32:37 compute-0 nova_compute[189296]: 2025-11-28 18:32:37.664 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:32:38 compute-0 podman[255692]: 2025-11-28 18:32:38.016196329 +0000 UTC m=+0.073493250 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, 
io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b)
Nov 28 18:32:38 compute-0 podman[255691]: 2025-11-28 18:32:38.039887086 +0000 UTC m=+0.101720059 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, config_id=edpm, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 28 18:32:38 compute-0 nova_compute[189296]: 2025-11-28 18:32:38.056 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:32:38 compute-0 podman[255693]: 2025-11-28 18:32:38.056798418 +0000 UTC m=+0.094340878 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:32:39 compute-0 nova_compute[189296]: 2025-11-28 18:32:39.063 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:32:40 compute-0 nova_compute[189296]: 2025-11-28 18:32:40.663 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:32:40 compute-0 nova_compute[189296]: 2025-11-28 18:32:40.664 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:32:42 compute-0 nova_compute[189296]: 2025-11-28 18:32:42.621 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:32:43 compute-0 nova_compute[189296]: 2025-11-28 18:32:43.060 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:32:44 compute-0 nova_compute[189296]: 2025-11-28 18:32:44.064 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:32:44 compute-0 podman[255751]: 2025-11-28 18:32:44.811447922 +0000 UTC m=+0.089625374 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 28 18:32:44 compute-0 podman[255752]: 2025-11-28 18:32:44.820702738 +0000 UTC m=+0.078372980 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 28 18:32:44 compute-0 podman[255753]: 2025-11-28 18:32:44.833182332 +0000 UTC m=+0.098598512 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, release-0.7.12=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': 
'/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, architecture=x86_64, container_name=kepler, vendor=Red Hat, Inc., name=ubi9, managed_by=edpm_ansible, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 28 18:32:44 compute-0 podman[255760]: 2025-11-28 18:32:44.874450927 +0000 UTC m=+0.112633054 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:32:47 compute-0 podman[255828]: 2025-11-28 18:32:47.067388728 +0000 UTC m=+0.126186434 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 28 18:32:48 compute-0 nova_compute[189296]: 2025-11-28 18:32:48.065 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:32:49 compute-0 nova_compute[189296]: 2025-11-28 18:32:49.068 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:32:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:51.993 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 28 18:32:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:51.993 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 28 18:32:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:51.994 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:32:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:51.994 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc143395760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:32:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:51.995 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:32:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:51.996 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:32:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:51.996 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:32:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:51.996 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:32:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:51.996 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:32:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:51.996 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:32:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:51.997 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:32:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:51.997 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:32:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:51.997 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:32:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:51.997 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:32:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:51.997 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:32:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:51.998 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:32:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:51.998 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:32:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:51.998 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:32:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:51.998 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:32:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:51.998 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:32:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:51.999 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:51.999 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:51.999 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:51.999 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.000 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.000 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.000 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.000 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.001 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc141da4530>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.005 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bf6c3ac0-6e00-4be5-ae3a-454d022268e8', 'name': 'te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5', 'flavor': {'id': 'b177f611-8f79-4bfd-9a12-e83e9545757b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '7d5268e2-45b5-44b2-b3c1-3da9b27b258e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000010', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4c71a276f38f4bfebf1d3631d6f82966', 'user_id': 'c1f6c07dc6c5400cbf4fa724992b16d3', 'hostId': 'd63a60f107fb9172c58f42464c0d0697d316dd72980345b387d4da6d', 'status': 'active', 'metadata': {'metering.server_group': 'a12ef97f-9351-448f-95c7-ab90e2c7b098'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.011 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '200bd8bc-d121-4a86-b728-ea98aac95adf', 'name': 'te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw', 'flavor': {'id': 'b177f611-8f79-4bfd-9a12-e83e9545757b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '7d5268e2-45b5-44b2-b3c1-3da9b27b258e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4c71a276f38f4bfebf1d3631d6f82966', 'user_id': 'c1f6c07dc6c5400cbf4fa724992b16d3', 'hostId': 'd63a60f107fb9172c58f42464c0d0697d316dd72980345b387d4da6d', 'status': 'active', 'metadata': {'metering.server_group': 'a12ef97f-9351-448f-95c7-ab90e2c7b098'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.012 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.012 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.013 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.013 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.014 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-28T18:32:52.013255) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.038 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.039 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.068 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.069 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.070 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.070 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc1433970b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.070 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.071 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.071 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.071 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.072 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-28T18:32:52.071573) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.136 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.bytes volume: 30165504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.137 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.203 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.bytes volume: 30579200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.203 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.204 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.204 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc1433971d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.204 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.204 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.204 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.205 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.205 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.latency volume: 631164918 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.205 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.latency volume: 45942895 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.205 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.latency volume: 597042360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.206 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.latency volume: 54497620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.206 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.207 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc143397c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.207 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.207 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.207 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.207 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-28T18:32:52.205010) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.207 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.209 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-28T18:32:52.207852) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.212 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.216 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.216 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.217 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc143397620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.217 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.217 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.217 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.217 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.218 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-28T18:32:52.217543) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.242 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/memory.usage volume: 42.46875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.279 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/memory.usage volume: 42.33984375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.279 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.280 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc143397260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.280 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.280 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.280 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.280 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.280 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.281 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.281 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.281 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.281 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-28T18:32:52.280685) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.282 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.282 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc143397290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.282 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.282 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.282 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.282 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.283 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.283 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.283 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.283 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.284 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.284 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc143408290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.284 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.284 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.284 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.285 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.285 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.285 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.285 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.286 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc1433972f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.286 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.286 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.286 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.286 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.286 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.latency volume: 2934343936 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.286 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.287 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.latency volume: 2414331628 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.287 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.288 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.288 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc144640f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.288 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.288 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.288 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.288 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.288 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.requests volume: 345 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.289 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.289 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.requests volume: 337 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.289 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.290 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.290 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc1433976b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.290 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.290 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.290 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.290 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.290 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.291 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.291 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.291 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc143397fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.291 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.292 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.292 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.292 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.292 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.292 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.293 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.293 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc14457db80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.293 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.293 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.293 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.293 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.294 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/cpu volume: 333290000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.294 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/cpu volume: 335490000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.294 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.294 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc143397950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.295 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.295 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc143397380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.295 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.295 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.296 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-28T18:32:52.282854) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.296 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.296 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-28T18:32:52.285035) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.296 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.296 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-28T18:32:52.286529) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.296 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-28T18:32:52.288699) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.296 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-28T18:32:52.290713) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.296 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-28T18:32:52.292310) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.296 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-28T18:32:52.293907) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.296 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.297 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc143397bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.297 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.297 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.297 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-28T18:32:52.296355) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.297 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.297 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.297 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.298 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.298 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.298 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc1433973e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.298 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.298 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.298 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.299 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.299 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.299 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc143397c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.299 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.300 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.300 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.300 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.300 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.300 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.301 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.301 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc143397ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.301 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-28T18:32:52.297639) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.301 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.301 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.301 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-28T18:32:52.299084) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.301 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.301 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-28T18:32:52.300234) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.302 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.302 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.302 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.302 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.303 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc1460ad370>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.303 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.303 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-28T18:32:52.301965) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.303 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.303 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.303 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.303 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.304 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.304 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.304 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.305 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.305 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc143397d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.305 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.305 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.306 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.306 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-28T18:32:52.303716) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.306 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.306 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.306 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.307 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.307 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc143397e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.307 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.307 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc143397650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.307 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-28T18:32:52.306266) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.307 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.307 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.307 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.308 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.308 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.308 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.308 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.309 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc143397e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.309 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.309 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.309 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.309 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.309 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-28T18:32:52.308067) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.310 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-28T18:32:52.309559) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.310 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.310 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.310 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc143397f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.310 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.311 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.311 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.311 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.311 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.311 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.312 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.312 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc143397230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.312 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.312 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.312 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.312 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.313 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.requests volume: 1088 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.313 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.313 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-28T18:32:52.311303) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.313 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.requests volume: 1106 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.313 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-28T18:32:52.312777) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.313 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.314 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.314 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.315 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.315 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.315 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.315 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.315 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.315 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.315 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.315 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.315 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.315 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.316 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.316 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.316 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.316 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.316 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.316 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.316 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.316 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.316 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.316 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.316 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.317 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.317 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.317 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:32:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:32:52.317 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:32:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:32:52.648 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:32:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:32:52.649 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:32:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:32:52.650 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:32:53 compute-0 nova_compute[189296]: 2025-11-28 18:32:53.069 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:32:54 compute-0 podman[255854]: 2025-11-28 18:32:54.067925202 +0000 UTC m=+0.114419818 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:32:54 compute-0 nova_compute[189296]: 2025-11-28 18:32:54.071 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:32:58 compute-0 nova_compute[189296]: 2025-11-28 18:32:58.080 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:32:59 compute-0 nova_compute[189296]: 2025-11-28 18:32:59.073 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:32:59 compute-0 podman[203494]: time="2025-11-28T18:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:32:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:32:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4794 "" "Go-http-client/1.1"
Nov 28 18:33:01 compute-0 openstack_network_exporter[205632]: ERROR   18:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:33:01 compute-0 openstack_network_exporter[205632]: ERROR   18:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:33:01 compute-0 openstack_network_exporter[205632]: ERROR   18:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:33:01 compute-0 openstack_network_exporter[205632]: ERROR   18:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:33:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:33:01 compute-0 openstack_network_exporter[205632]: ERROR   18:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:33:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:33:03 compute-0 nova_compute[189296]: 2025-11-28 18:33:03.087 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:33:04 compute-0 nova_compute[189296]: 2025-11-28 18:33:04.076 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:33:08 compute-0 nova_compute[189296]: 2025-11-28 18:33:08.092 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:33:09 compute-0 podman[255880]: 2025-11-28 18:33:09.015601097 +0000 UTC m=+0.074866044 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:33:09 compute-0 podman[255879]: 2025-11-28 18:33:09.015910535 +0000 UTC m=+0.078700279 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Nov 28 18:33:09 compute-0 podman[255878]: 2025-11-28 18:33:09.03871524 +0000 UTC m=+0.104805484 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, build-date=2025-08-20T13:12:41, vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9)
Nov 28 18:33:09 compute-0 nova_compute[189296]: 2025-11-28 18:33:09.078 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:33:13 compute-0 nova_compute[189296]: 2025-11-28 18:33:13.096 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:33:14 compute-0 nova_compute[189296]: 2025-11-28 18:33:14.081 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:33:15 compute-0 podman[255936]: 2025-11-28 18:33:15.056358714 +0000 UTC m=+0.100598621 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 28 18:33:15 compute-0 podman[255937]: 2025-11-28 18:33:15.068441748 +0000 UTC m=+0.096630345 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, container_name=kepler, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.buildah.version=1.29.0)
Nov 28 18:33:15 compute-0 podman[255938]: 2025-11-28 18:33:15.06974581 +0000 UTC m=+0.109223832 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 28 18:33:15 compute-0 podman[255935]: 2025-11-28 18:33:15.087198675 +0000 UTC m=+0.126327487 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 18:33:18 compute-0 nova_compute[189296]: 2025-11-28 18:33:18.099 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:33:18 compute-0 podman[256011]: 2025-11-28 18:33:18.173590387 +0000 UTC m=+0.220122422 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:33:19 compute-0 nova_compute[189296]: 2025-11-28 18:33:19.082 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:33:23 compute-0 nova_compute[189296]: 2025-11-28 18:33:23.102 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:33:24 compute-0 nova_compute[189296]: 2025-11-28 18:33:24.084 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:33:25 compute-0 podman[256038]: 2025-11-28 18:33:25.025692862 +0000 UTC m=+0.087760428 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 28 18:33:26 compute-0 nova_compute[189296]: 2025-11-28 18:33:26.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:33:28 compute-0 nova_compute[189296]: 2025-11-28 18:33:28.106 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:33:28 compute-0 nova_compute[189296]: 2025-11-28 18:33:28.620 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:33:29 compute-0 nova_compute[189296]: 2025-11-28 18:33:29.086 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:33:29 compute-0 podman[203494]: time="2025-11-28T18:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:33:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:33:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4789 "" "Go-http-client/1.1"
Nov 28 18:33:31 compute-0 openstack_network_exporter[205632]: ERROR   18:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:33:31 compute-0 openstack_network_exporter[205632]: ERROR   18:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:33:31 compute-0 openstack_network_exporter[205632]: ERROR   18:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:33:31 compute-0 openstack_network_exporter[205632]: ERROR   18:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:33:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:33:31 compute-0 openstack_network_exporter[205632]: ERROR   18:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:33:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:33:31 compute-0 nova_compute[189296]: 2025-11-28 18:33:31.630 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:33:32 compute-0 nova_compute[189296]: 2025-11-28 18:33:32.631 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:33:32 compute-0 nova_compute[189296]: 2025-11-28 18:33:32.632 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:33:33 compute-0 nova_compute[189296]: 2025-11-28 18:33:33.109 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:33:33 compute-0 nova_compute[189296]: 2025-11-28 18:33:33.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:33:33 compute-0 nova_compute[189296]: 2025-11-28 18:33:33.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:33:33 compute-0 nova_compute[189296]: 2025-11-28 18:33:33.989 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-bf6c3ac0-6e00-4be5-ae3a-454d022268e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:33:33 compute-0 nova_compute[189296]: 2025-11-28 18:33:33.990 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-bf6c3ac0-6e00-4be5-ae3a-454d022268e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:33:33 compute-0 nova_compute[189296]: 2025-11-28 18:33:33.990 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:33:34 compute-0 nova_compute[189296]: 2025-11-28 18:33:34.089 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:33:35 compute-0 nova_compute[189296]: 2025-11-28 18:33:35.888 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Updating instance_info_cache with network_info: [{"id": "0a072d7e-c128-48b9-9754-327584bc3579", "address": "fa:16:3e:c4:e2:c9", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a072d7e-c1", "ovs_interfaceid": "0a072d7e-c128-48b9-9754-327584bc3579", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:33:35 compute-0 nova_compute[189296]: 2025-11-28 18:33:35.905 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-bf6c3ac0-6e00-4be5-ae3a-454d022268e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:33:35 compute-0 nova_compute[189296]: 2025-11-28 18:33:35.906 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:33:35 compute-0 nova_compute[189296]: 2025-11-28 18:33:35.907 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:33:37 compute-0 nova_compute[189296]: 2025-11-28 18:33:37.634 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:33:37 compute-0 nova_compute[189296]: 2025-11-28 18:33:37.635 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:33:37 compute-0 nova_compute[189296]: 2025-11-28 18:33:37.666 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:33:37 compute-0 nova_compute[189296]: 2025-11-28 18:33:37.667 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:33:37 compute-0 nova_compute[189296]: 2025-11-28 18:33:37.668 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:33:37 compute-0 nova_compute[189296]: 2025-11-28 18:33:37.669 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:33:37 compute-0 nova_compute[189296]: 2025-11-28 18:33:37.796 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:33:37 compute-0 nova_compute[189296]: 2025-11-28 18:33:37.882 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:33:37 compute-0 nova_compute[189296]: 2025-11-28 18:33:37.884 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:33:37 compute-0 nova_compute[189296]: 2025-11-28 18:33:37.984 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:33:37 compute-0 nova_compute[189296]: 2025-11-28 18:33:37.992 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:33:38 compute-0 nova_compute[189296]: 2025-11-28 18:33:38.056 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:33:38 compute-0 nova_compute[189296]: 2025-11-28 18:33:38.058 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:33:38 compute-0 nova_compute[189296]: 2025-11-28 18:33:38.113 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:33:38 compute-0 nova_compute[189296]: 2025-11-28 18:33:38.151 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:33:38 compute-0 nova_compute[189296]: 2025-11-28 18:33:38.467 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:33:38 compute-0 nova_compute[189296]: 2025-11-28 18:33:38.468 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4983MB free_disk=72.24920654296875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:33:38 compute-0 nova_compute[189296]: 2025-11-28 18:33:38.468 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:33:38 compute-0 nova_compute[189296]: 2025-11-28 18:33:38.469 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:33:38 compute-0 nova_compute[189296]: 2025-11-28 18:33:38.606 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 200bd8bc-d121-4a86-b728-ea98aac95adf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:33:38 compute-0 nova_compute[189296]: 2025-11-28 18:33:38.607 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance bf6c3ac0-6e00-4be5-ae3a-454d022268e8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:33:38 compute-0 nova_compute[189296]: 2025-11-28 18:33:38.608 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:33:38 compute-0 nova_compute[189296]: 2025-11-28 18:33:38.609 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:33:38 compute-0 nova_compute[189296]: 2025-11-28 18:33:38.762 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:33:38 compute-0 nova_compute[189296]: 2025-11-28 18:33:38.783 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:33:38 compute-0 nova_compute[189296]: 2025-11-28 18:33:38.785 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:33:38 compute-0 nova_compute[189296]: 2025-11-28 18:33:38.785 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:33:39 compute-0 nova_compute[189296]: 2025-11-28 18:33:39.092 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:33:40 compute-0 podman[256074]: 2025-11-28 18:33:40.076845687 +0000 UTC m=+0.115548646 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
tcib_build_tag=f26160204c78771e78cdd2489258319b, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Nov 28 18:33:40 compute-0 podman[256075]: 2025-11-28 18:33:40.080909496 +0000 UTC m=+0.116727214 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:33:40 compute-0 podman[256073]: 2025-11-28 18:33:40.08561245 +0000 UTC m=+0.132473887 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1755695350, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter)
Nov 28 18:33:40 compute-0 nova_compute[189296]: 2025-11-28 18:33:40.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:33:40 compute-0 nova_compute[189296]: 2025-11-28 18:33:40.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:33:40 compute-0 nova_compute[189296]: 2025-11-28 18:33:40.628 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:33:40 compute-0 nova_compute[189296]: 2025-11-28 18:33:40.628 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 28 18:33:43 compute-0 nova_compute[189296]: 2025-11-28 18:33:43.116 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:33:44 compute-0 nova_compute[189296]: 2025-11-28 18:33:44.095 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:33:46 compute-0 podman[256131]: 2025-11-28 18:33:46.048722854 +0000 UTC m=+0.085697848 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 28 18:33:46 compute-0 podman[256130]: 2025-11-28 18:33:46.054083705 +0000 UTC m=+0.101292469 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 28 18:33:46 compute-0 podman[256135]: 2025-11-28 18:33:46.06660654 +0000 UTC m=+0.095522767 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_id=edpm, vcs-type=git, container_name=kepler, release=1214.1726694543, managed_by=edpm_ansible)
Nov 28 18:33:46 compute-0 podman[256137]: 2025-11-28 18:33:46.087560641 +0000 UTC m=+0.113753433 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:33:46 compute-0 nova_compute[189296]: 2025-11-28 18:33:46.649 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:33:46 compute-0 nova_compute[189296]: 2025-11-28 18:33:46.650 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 28 18:33:46 compute-0 nova_compute[189296]: 2025-11-28 18:33:46.676 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 28 18:33:48 compute-0 nova_compute[189296]: 2025-11-28 18:33:48.119 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:33:49 compute-0 podman[256208]: 2025-11-28 18:33:49.069839457 +0000 UTC m=+0.132377195 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 28 18:33:49 compute-0 nova_compute[189296]: 2025-11-28 18:33:49.098 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:33:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:33:52.650 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:33:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:33:52.651 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:33:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:33:52.652 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:33:53 compute-0 nova_compute[189296]: 2025-11-28 18:33:53.124 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:33:54 compute-0 nova_compute[189296]: 2025-11-28 18:33:54.102 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:33:56 compute-0 podman[256234]: 2025-11-28 18:33:56.05809613 +0000 UTC m=+0.101442802 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 28 18:33:58 compute-0 nova_compute[189296]: 2025-11-28 18:33:58.128 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:33:59 compute-0 nova_compute[189296]: 2025-11-28 18:33:59.105 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:33:59 compute-0 podman[203494]: time="2025-11-28T18:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:33:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:33:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4789 "" "Go-http-client/1.1"
Nov 28 18:34:01 compute-0 openstack_network_exporter[205632]: ERROR   18:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:34:01 compute-0 openstack_network_exporter[205632]: ERROR   18:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:34:01 compute-0 openstack_network_exporter[205632]: ERROR   18:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:34:01 compute-0 openstack_network_exporter[205632]: ERROR   18:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:34:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:34:01 compute-0 openstack_network_exporter[205632]: ERROR   18:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:34:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:34:03 compute-0 nova_compute[189296]: 2025-11-28 18:34:03.133 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:34:04 compute-0 nova_compute[189296]: 2025-11-28 18:34:04.107 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:34:08 compute-0 nova_compute[189296]: 2025-11-28 18:34:08.135 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:34:09 compute-0 nova_compute[189296]: 2025-11-28 18:34:09.110 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:34:11 compute-0 podman[256257]: 2025-11-28 18:34:11.073616001 +0000 UTC m=+0.111091637 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 28 18:34:11 compute-0 podman[256255]: 2025-11-28 18:34:11.086179017 +0000 UTC m=+0.131264459 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-type=git, version=9.6, config_id=edpm, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 28 18:34:11 compute-0 podman[256256]: 2025-11-28 18:34:11.096256062 +0000 UTC m=+0.139078208 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=f26160204c78771e78cdd2489258319b, tcib_managed=true, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 28 18:34:13 compute-0 nova_compute[189296]: 2025-11-28 18:34:13.138 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:34:14 compute-0 nova_compute[189296]: 2025-11-28 18:34:14.111 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:34:17 compute-0 podman[256310]: 2025-11-28 18:34:17.072309681 +0000 UTC m=+0.107391845 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 18:34:17 compute-0 podman[256312]: 2025-11-28 18:34:17.075158412 +0000 UTC m=+0.113518287 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_id=edpm, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, version=9.4, maintainer=Red Hat, Inc., distribution-scope=public, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, com.redhat.component=ubi9-container, 
release-0.7.12=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9)
Nov 28 18:34:17 compute-0 podman[256311]: 2025-11-28 18:34:17.080260355 +0000 UTC m=+0.115313919 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:34:17 compute-0 podman[256313]: 2025-11-28 18:34:17.092688518 +0000 UTC m=+0.110313938 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 18:34:18 compute-0 nova_compute[189296]: 2025-11-28 18:34:18.141 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:34:19 compute-0 nova_compute[189296]: 2025-11-28 18:34:19.115 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:34:20 compute-0 podman[256385]: 2025-11-28 18:34:20.101675998 +0000 UTC m=+0.156994104 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 28 18:34:23 compute-0 nova_compute[189296]: 2025-11-28 18:34:23.146 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:34:24 compute-0 nova_compute[189296]: 2025-11-28 18:34:24.119 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:34:27 compute-0 podman[256410]: 2025-11-28 18:34:27.047316931 +0000 UTC m=+0.099055044 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 28 18:34:27 compute-0 nova_compute[189296]: 2025-11-28 18:34:27.652 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:34:28 compute-0 nova_compute[189296]: 2025-11-28 18:34:28.153 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:34:28 compute-0 nova_compute[189296]: 2025-11-28 18:34:28.620 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:34:29 compute-0 nova_compute[189296]: 2025-11-28 18:34:29.121 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:34:29 compute-0 podman[203494]: time="2025-11-28T18:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:34:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:34:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4795 "" "Go-http-client/1.1"
Nov 28 18:34:31 compute-0 openstack_network_exporter[205632]: ERROR   18:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:34:31 compute-0 openstack_network_exporter[205632]: ERROR   18:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:34:31 compute-0 openstack_network_exporter[205632]: ERROR   18:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:34:31 compute-0 openstack_network_exporter[205632]: ERROR   18:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:34:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:34:31 compute-0 openstack_network_exporter[205632]: ERROR   18:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:34:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:34:31 compute-0 nova_compute[189296]: 2025-11-28 18:34:31.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:34:33 compute-0 nova_compute[189296]: 2025-11-28 18:34:33.158 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:34:33 compute-0 nova_compute[189296]: 2025-11-28 18:34:33.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:34:33 compute-0 nova_compute[189296]: 2025-11-28 18:34:33.626 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:34:34 compute-0 nova_compute[189296]: 2025-11-28 18:34:34.123 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:34:35 compute-0 nova_compute[189296]: 2025-11-28 18:34:35.633 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:34:35 compute-0 nova_compute[189296]: 2025-11-28 18:34:35.637 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:34:35 compute-0 nova_compute[189296]: 2025-11-28 18:34:35.638 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 18:34:36 compute-0 nova_compute[189296]: 2025-11-28 18:34:36.887 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:34:36 compute-0 nova_compute[189296]: 2025-11-28 18:34:36.888 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:34:36 compute-0 nova_compute[189296]: 2025-11-28 18:34:36.888 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:34:36 compute-0 nova_compute[189296]: 2025-11-28 18:34:36.888 189300 DEBUG nova.objects.instance [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lazy-loading 'info_cache' on Instance uuid 200bd8bc-d121-4a86-b728-ea98aac95adf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:34:38 compute-0 nova_compute[189296]: 2025-11-28 18:34:38.163 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:34:39 compute-0 nova_compute[189296]: 2025-11-28 18:34:39.126 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:34:39 compute-0 nova_compute[189296]: 2025-11-28 18:34:39.678 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Updating instance_info_cache with network_info: [{"id": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "address": "fa:16:3e:c6:fd:79", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49c3cd00-3b", "ovs_interfaceid": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:34:39 compute-0 nova_compute[189296]: 2025-11-28 18:34:39.697 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-200bd8bc-d121-4a86-b728-ea98aac95adf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:34:39 compute-0 nova_compute[189296]: 2025-11-28 18:34:39.698 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:34:39 compute-0 nova_compute[189296]: 2025-11-28 18:34:39.698 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:34:39 compute-0 nova_compute[189296]: 2025-11-28 18:34:39.699 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:34:39 compute-0 nova_compute[189296]: 2025-11-28 18:34:39.726 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:34:39 compute-0 nova_compute[189296]: 2025-11-28 18:34:39.727 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:34:39 compute-0 nova_compute[189296]: 2025-11-28 18:34:39.728 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:34:39 compute-0 nova_compute[189296]: 2025-11-28 18:34:39.728 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:34:39 compute-0 nova_compute[189296]: 2025-11-28 18:34:39.837 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:34:39 compute-0 nova_compute[189296]: 2025-11-28 18:34:39.918 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:34:39 compute-0 nova_compute[189296]: 2025-11-28 18:34:39.919 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:34:40 compute-0 nova_compute[189296]: 2025-11-28 18:34:40.020 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:34:40 compute-0 nova_compute[189296]: 2025-11-28 18:34:40.029 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:34:40 compute-0 nova_compute[189296]: 2025-11-28 18:34:40.107 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:34:40 compute-0 nova_compute[189296]: 2025-11-28 18:34:40.108 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 18:34:40 compute-0 nova_compute[189296]: 2025-11-28 18:34:40.176 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 18:34:40 compute-0 nova_compute[189296]: 2025-11-28 18:34:40.604 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:34:40 compute-0 nova_compute[189296]: 2025-11-28 18:34:40.607 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4988MB free_disk=72.24920654296875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:34:40 compute-0 nova_compute[189296]: 2025-11-28 18:34:40.608 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:34:40 compute-0 nova_compute[189296]: 2025-11-28 18:34:40.608 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:34:40 compute-0 nova_compute[189296]: 2025-11-28 18:34:40.685 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 200bd8bc-d121-4a86-b728-ea98aac95adf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:34:40 compute-0 nova_compute[189296]: 2025-11-28 18:34:40.686 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance bf6c3ac0-6e00-4be5-ae3a-454d022268e8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 28 18:34:40 compute-0 nova_compute[189296]: 2025-11-28 18:34:40.686 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:34:40 compute-0 nova_compute[189296]: 2025-11-28 18:34:40.686 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:34:40 compute-0 nova_compute[189296]: 2025-11-28 18:34:40.865 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:34:40 compute-0 nova_compute[189296]: 2025-11-28 18:34:40.885 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:34:40 compute-0 nova_compute[189296]: 2025-11-28 18:34:40.887 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:34:40 compute-0 nova_compute[189296]: 2025-11-28 18:34:40.888 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.279s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:34:41 compute-0 nova_compute[189296]: 2025-11-28 18:34:41.815 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:34:42 compute-0 podman[256447]: 2025-11-28 18:34:42.067595575 +0000 UTC m=+0.104307991 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 28 18:34:42 compute-0 podman[256445]: 2025-11-28 18:34:42.06861978 +0000 UTC m=+0.115181776 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, release=1755695350, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, distribution-scope=public, version=9.6, io.openshift.expose-services=)
Nov 28 18:34:42 compute-0 podman[256446]: 2025-11-28 18:34:42.092232285 +0000 UTC m=+0.131211187 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=f26160204c78771e78cdd2489258319b, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm)
Nov 28 18:34:42 compute-0 nova_compute[189296]: 2025-11-28 18:34:42.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:34:43 compute-0 nova_compute[189296]: 2025-11-28 18:34:43.166 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:34:44 compute-0 nova_compute[189296]: 2025-11-28 18:34:44.129 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:34:47 compute-0 nova_compute[189296]: 2025-11-28 18:34:47.620 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:34:48 compute-0 podman[256504]: 2025-11-28 18:34:48.058030547 +0000 UTC m=+0.099626887 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., container_name=kepler, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, distribution-scope=public, vcs-type=git, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 28 18:34:48 compute-0 podman[256505]: 2025-11-28 18:34:48.078726972 +0000 UTC m=+0.104291882 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 28 18:34:48 compute-0 podman[256502]: 2025-11-28 18:34:48.081642022 +0000 UTC m=+0.124976334 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:34:48 compute-0 podman[256503]: 2025-11-28 18:34:48.09672302 +0000 UTC m=+0.134386084 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:34:48 compute-0 nova_compute[189296]: 2025-11-28 18:34:48.168 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:34:49 compute-0 nova_compute[189296]: 2025-11-28 18:34:49.132 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:34:51 compute-0 podman[256580]: 2025-11-28 18:34:51.098713394 +0000 UTC m=+0.141684703 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:34:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:51.993 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 28 18:34:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:51.995 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 28 18:34:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:51.995 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:34:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:51.996 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc143395760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:34:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:51.997 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:34:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:51.997 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:34:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:51.997 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:34:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:51.997 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:34:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:51.997 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:34:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:51.998 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:34:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:51.998 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:34:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:51.998 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:34:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:51.998 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:34:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:51.999 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:34:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:51.999 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:51.999 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:51.999 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.000 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.000 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.000 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.000 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.000 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.001 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.001 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.001 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.001 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.002 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.002 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.002 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc145bdb620>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.007 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bf6c3ac0-6e00-4be5-ae3a-454d022268e8', 'name': 'te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5', 'flavor': {'id': 'b177f611-8f79-4bfd-9a12-e83e9545757b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '7d5268e2-45b5-44b2-b3c1-3da9b27b258e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000010', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4c71a276f38f4bfebf1d3631d6f82966', 'user_id': 'c1f6c07dc6c5400cbf4fa724992b16d3', 'hostId': 'd63a60f107fb9172c58f42464c0d0697d316dd72980345b387d4da6d', 'status': 'active', 'metadata': {'metering.server_group': 'a12ef97f-9351-448f-95c7-ab90e2c7b098'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.011 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '200bd8bc-d121-4a86-b728-ea98aac95adf', 'name': 'te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw', 'flavor': {'id': 'b177f611-8f79-4bfd-9a12-e83e9545757b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '7d5268e2-45b5-44b2-b3c1-3da9b27b258e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '4c71a276f38f4bfebf1d3631d6f82966', 'user_id': 'c1f6c07dc6c5400cbf4fa724992b16d3', 'hostId': 'd63a60f107fb9172c58f42464c0d0697d316dd72980345b387d4da6d', 'status': 'active', 'metadata': {'metering.server_group': 'a12ef97f-9351-448f-95c7-ab90e2c7b098'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.012 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.012 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.012 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.013 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.014 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-28T18:34:52.012860) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.034 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.035 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.051 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.052 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.053 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.053 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc1433970b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.053 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.053 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.053 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.054 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.055 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-28T18:34:52.054196) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.106 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.bytes volume: 30165504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.107 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.142 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.bytes volume: 30579200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.143 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.144 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.144 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc1433971d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.144 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.144 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.145 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.145 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.145 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.latency volume: 631164918 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.145 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-28T18:34:52.145164) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.146 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.latency volume: 45942895 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.146 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.latency volume: 597042360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.146 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.latency volume: 54497620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.147 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.147 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc143397c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.148 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.148 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.148 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.148 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.149 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-28T18:34:52.148529) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.153 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.157 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.158 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.158 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc143397620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.158 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.159 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.159 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.160 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-28T18:34:52.159459) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.159 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.186 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/memory.usage volume: 42.46875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.204 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/memory.usage volume: 42.3515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.204 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.205 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc143397260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.205 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.205 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.205 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.205 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.206 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-28T18:34:52.205458) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.206 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.206 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.206 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.206 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.207 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.207 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc143397290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.207 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.207 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.207 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.207 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.208 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-28T18:34:52.207830) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.208 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.208 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.208 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.209 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.209 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.209 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc143408290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.209 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.209 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.209 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.210 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.210 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-28T18:34:52.210055) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.210 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.210 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.211 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.211 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc1433972f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.211 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.211 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.211 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.211 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.212 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-28T18:34:52.211647) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.212 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.latency volume: 2934343936 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.212 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.212 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.latency volume: 2414331628 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.212 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.213 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.213 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc144640f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.213 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.213 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.213 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.214 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.214 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-28T18:34:52.213937) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.214 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.requests volume: 345 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.214 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.214 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.requests volume: 337 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.215 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.215 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.215 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc1433976b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.215 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.215 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.215 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.215 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.216 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-28T18:34:52.215863) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.216 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.216 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.216 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.216 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc143397fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.217 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.217 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.217 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.217 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.217 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-28T18:34:52.217206) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.217 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.217 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.218 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.218 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc14457db80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.218 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.218 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.218 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.218 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.218 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-28T18:34:52.218547) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.219 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/cpu volume: 334870000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.219 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/cpu volume: 337050000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.219 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.219 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc143397950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.219 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.219 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc143397380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.219 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.219 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.219 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.220 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.220 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-28T18:34:52.220040) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.220 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.220 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc143397bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.220 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.220 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.221 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.221 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.221 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-28T18:34:52.221069) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.221 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.221 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.221 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.222 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc1433973e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.222 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.222 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.222 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.222 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.222 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-28T18:34:52.222439) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.223 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.223 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc143397c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.223 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.223 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.223 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.223 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.223 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.223 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-28T18:34:52.223552) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.224 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.224 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.224 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc143397ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.224 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.224 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.224 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.224 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.225 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.225 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-28T18:34:52.224770) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.225 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.225 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.225 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc1460ad370>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.225 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.225 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.225 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.226 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.226 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-28T18:34:52.226024) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.226 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.226 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.226 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.227 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.227 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.227 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc143397d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.227 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.227 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.227 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.227 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.228 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.228 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-28T18:34:52.227708) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.228 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.228 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.228 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc143397e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.228 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.228 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc143397650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.228 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.229 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.229 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.229 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.229 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-28T18:34:52.229266) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.229 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.230 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.230 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.230 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc143397e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.230 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.231 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.231 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.231 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.231 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-28T18:34:52.231208) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.231 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.232 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.232 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.232 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc143397f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.232 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.232 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.232 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.233 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.233 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-28T18:34:52.233035) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.233 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.234 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.234 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.234 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc143397230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.234 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.234 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.234 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.235 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.235 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-28T18:34:52.235059) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.235 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.requests volume: 1088 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.236 15 DEBUG ceilometer.compute.pollsters [-] bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.236 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.requests volume: 1106 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.236 15 DEBUG ceilometer.compute.pollsters [-] 200bd8bc-d121-4a86-b728-ea98aac95adf/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.237 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.237 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.237 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.237 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.238 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.238 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.238 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.238 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.238 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.238 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.239 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.239 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.239 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.239 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.239 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.239 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.240 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.240 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.240 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.240 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.240 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.240 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.241 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.241 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.241 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.241 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:34:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:34:52.242 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:34:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:34:52.651 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:34:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:34:52.652 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:34:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:34:52.653 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:34:53 compute-0 nova_compute[189296]: 2025-11-28 18:34:53.172 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:34:54 compute-0 nova_compute[189296]: 2025-11-28 18:34:54.135 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:34:58 compute-0 podman[256607]: 2025-11-28 18:34:58.035953172 +0000 UTC m=+0.091120830 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:34:58 compute-0 nova_compute[189296]: 2025-11-28 18:34:58.176 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:34:59 compute-0 nova_compute[189296]: 2025-11-28 18:34:59.137 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:34:59 compute-0 podman[203494]: time="2025-11-28T18:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:34:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:34:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4799 "" "Go-http-client/1.1"
Nov 28 18:35:01 compute-0 openstack_network_exporter[205632]: ERROR   18:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:35:01 compute-0 openstack_network_exporter[205632]: ERROR   18:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:35:01 compute-0 openstack_network_exporter[205632]: ERROR   18:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:35:01 compute-0 openstack_network_exporter[205632]: ERROR   18:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:35:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:35:01 compute-0 openstack_network_exporter[205632]: ERROR   18:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:35:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:35:03 compute-0 nova_compute[189296]: 2025-11-28 18:35:03.181 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:35:04 compute-0 nova_compute[189296]: 2025-11-28 18:35:04.140 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:35:08 compute-0 nova_compute[189296]: 2025-11-28 18:35:08.188 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:35:09 compute-0 nova_compute[189296]: 2025-11-28 18:35:09.142 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:35:13 compute-0 podman[256636]: 2025-11-28 18:35:13.043010859 +0000 UTC m=+0.083940666 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 28 18:35:13 compute-0 podman[256630]: 2025-11-28 18:35:13.047049996 +0000 UTC m=+0.083335410 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Nov 28 18:35:13 compute-0 podman[256629]: 2025-11-28 18:35:13.056245141 +0000 UTC m=+0.114557672 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 28 18:35:13 compute-0 nova_compute[189296]: 2025-11-28 18:35:13.192 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:35:14 compute-0 nova_compute[189296]: 2025-11-28 18:35:14.146 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:35:18 compute-0 nova_compute[189296]: 2025-11-28 18:35:18.196 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:35:19 compute-0 podman[256685]: 2025-11-28 18:35:19.062453846 +0000 UTC m=+0.100837547 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true)
Nov 28 18:35:19 compute-0 podman[256684]: 2025-11-28 18:35:19.07126497 +0000 UTC m=+0.116589030 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 28 18:35:19 compute-0 podman[256687]: 2025-11-28 18:35:19.082304409 +0000 UTC m=+0.108114154 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:35:19 compute-0 podman[256686]: 2025-11-28 18:35:19.09960598 +0000 UTC m=+0.133667926 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, version=9.4, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, distribution-scope=public, maintainer=Red Hat, Inc., 
com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30)
Nov 28 18:35:19 compute-0 nova_compute[189296]: 2025-11-28 18:35:19.148 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:35:22 compute-0 podman[256760]: 2025-11-28 18:35:22.099234692 +0000 UTC m=+0.146746646 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125)
Nov 28 18:35:23 compute-0 nova_compute[189296]: 2025-11-28 18:35:23.200 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:35:24 compute-0 nova_compute[189296]: 2025-11-28 18:35:24.150 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:35:28 compute-0 nova_compute[189296]: 2025-11-28 18:35:28.204 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:35:29 compute-0 podman[256784]: 2025-11-28 18:35:29.029492601 +0000 UTC m=+0.084888708 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 28 18:35:29 compute-0 nova_compute[189296]: 2025-11-28 18:35:29.152 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:35:29 compute-0 nova_compute[189296]: 2025-11-28 18:35:29.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:35:29 compute-0 nova_compute[189296]: 2025-11-28 18:35:29.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:35:29 compute-0 podman[203494]: time="2025-11-28T18:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:35:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:35:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4797 "" "Go-http-client/1.1"
Nov 28 18:35:31 compute-0 openstack_network_exporter[205632]: ERROR   18:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:35:31 compute-0 openstack_network_exporter[205632]: ERROR   18:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:35:31 compute-0 openstack_network_exporter[205632]: ERROR   18:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:35:31 compute-0 openstack_network_exporter[205632]: ERROR   18:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:35:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:35:31 compute-0 openstack_network_exporter[205632]: ERROR   18:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:35:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:35:33 compute-0 nova_compute[189296]: 2025-11-28 18:35:33.208 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:35:33 compute-0 nova_compute[189296]: 2025-11-28 18:35:33.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:35:34 compute-0 nova_compute[189296]: 2025-11-28 18:35:34.155 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:35:35 compute-0 nova_compute[189296]: 2025-11-28 18:35:35.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:35:35 compute-0 nova_compute[189296]: 2025-11-28 18:35:35.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:35:35 compute-0 nova_compute[189296]: 2025-11-28 18:35:35.926 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "refresh_cache-bf6c3ac0-6e00-4be5-ae3a-454d022268e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 18:35:35 compute-0 nova_compute[189296]: 2025-11-28 18:35:35.927 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquired lock "refresh_cache-bf6c3ac0-6e00-4be5-ae3a-454d022268e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 18:35:35 compute-0 nova_compute[189296]: 2025-11-28 18:35:35.927 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 28 18:35:36 compute-0 nova_compute[189296]: 2025-11-28 18:35:36.953 189300 DEBUG nova.network.neutron [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Updating instance_info_cache with network_info: [{"id": "0a072d7e-c128-48b9-9754-327584bc3579", "address": "fa:16:3e:c4:e2:c9", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a072d7e-c1", "ovs_interfaceid": "0a072d7e-c128-48b9-9754-327584bc3579", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:35:36 compute-0 nova_compute[189296]: 2025-11-28 18:35:36.970 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Releasing lock "refresh_cache-bf6c3ac0-6e00-4be5-ae3a-454d022268e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 18:35:36 compute-0 nova_compute[189296]: 2025-11-28 18:35:36.971 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 28 18:35:36 compute-0 nova_compute[189296]: 2025-11-28 18:35:36.971 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:35:36 compute-0 nova_compute[189296]: 2025-11-28 18:35:36.971 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:35:38 compute-0 nova_compute[189296]: 2025-11-28 18:35:38.212 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:35:39 compute-0 nova_compute[189296]: 2025-11-28 18:35:39.157 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:35:39 compute-0 nova_compute[189296]: 2025-11-28 18:35:39.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:35:40 compute-0 nova_compute[189296]: 2025-11-28 18:35:40.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:35:40 compute-0 nova_compute[189296]: 2025-11-28 18:35:40.657 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:35:40 compute-0 nova_compute[189296]: 2025-11-28 18:35:40.657 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:35:40 compute-0 nova_compute[189296]: 2025-11-28 18:35:40.658 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:35:40 compute-0 nova_compute[189296]: 2025-11-28 18:35:40.658 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 28 18:35:40 compute-0 nova_compute[189296]: 2025-11-28 18:35:40.747 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:35:40 compute-0 nova_compute[189296]: 2025-11-28 18:35:40.809 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:35:40 compute-0 nova_compute[189296]: 2025-11-28 18:35:40.810 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:35:40 compute-0 nova_compute[189296]: 2025-11-28 18:35:40.880 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:35:40 compute-0 nova_compute[189296]: 2025-11-28 18:35:40.892 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:35:40 compute-0 nova_compute[189296]: 2025-11-28 18:35:40.954 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:35:40 compute-0 nova_compute[189296]: 2025-11-28 18:35:40.957 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 18:35:41 compute-0 nova_compute[189296]: 2025-11-28 18:35:41.019 189300 DEBUG oslo_concurrency.processutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 18:35:41 compute-0 nova_compute[189296]: 2025-11-28 18:35:41.354 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 28 18:35:41 compute-0 nova_compute[189296]: 2025-11-28 18:35:41.355 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4979MB free_disk=72.24920654296875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 28 18:35:41 compute-0 nova_compute[189296]: 2025-11-28 18:35:41.356 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:35:41 compute-0 nova_compute[189296]: 2025-11-28 18:35:41.356 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:35:41 compute-0 nova_compute[189296]: 2025-11-28 18:35:41.425 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance 200bd8bc-d121-4a86-b728-ea98aac95adf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 28 18:35:41 compute-0 nova_compute[189296]: 2025-11-28 18:35:41.425 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Instance bf6c3ac0-6e00-4be5-ae3a-454d022268e8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 28 18:35:41 compute-0 nova_compute[189296]: 2025-11-28 18:35:41.426 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 28 18:35:41 compute-0 nova_compute[189296]: 2025-11-28 18:35:41.426 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 28 18:35:41 compute-0 nova_compute[189296]: 2025-11-28 18:35:41.479 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 28 18:35:41 compute-0 nova_compute[189296]: 2025-11-28 18:35:41.502 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 28 18:35:41 compute-0 nova_compute[189296]: 2025-11-28 18:35:41.504 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 28 18:35:41 compute-0 nova_compute[189296]: 2025-11-28 18:35:41.504 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:35:43 compute-0 nova_compute[189296]: 2025-11-28 18:35:43.216 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:35:43 compute-0 nova_compute[189296]: 2025-11-28 18:35:43.505 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 18:35:44 compute-0 podman[256819]: 2025-11-28 18:35:44.055347097 +0000 UTC m=+0.088328082 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 28 18:35:44 compute-0 podman[256821]: 2025-11-28 18:35:44.05627575 +0000 UTC m=+0.083174987 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:35:44 compute-0 podman[256818]: 2025-11-28 18:35:44.062704816 +0000 UTC m=+0.119541572 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41)
Nov 28 18:35:44 compute-0 nova_compute[189296]: 2025-11-28 18:35:44.159 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:35:44 compute-0 nova_compute[189296]: 2025-11-28 18:35:44.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 18:35:48 compute-0 nova_compute[189296]: 2025-11-28 18:35:48.221 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:35:49 compute-0 nova_compute[189296]: 2025-11-28 18:35:49.166 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:35:50 compute-0 podman[256877]: 2025-11-28 18:35:50.04404318 +0000 UTC m=+0.091300814 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 28 18:35:50 compute-0 podman[256879]: 2025-11-28 18:35:50.068578228 +0000 UTC m=+0.085618736 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 
Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 28 18:35:50 compute-0 podman[256876]: 2025-11-28 18:35:50.073480967 +0000 UTC m=+0.119720526 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 28 18:35:50 compute-0 podman[256878]: 2025-11-28 18:35:50.098959597 +0000 UTC m=+0.128191702 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, architecture=x86_64, io.openshift.tags=base rhel9)
Nov 28 18:35:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:35:52.652 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 18:35:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:35:52.653 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 18:35:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:35:52.654 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 18:35:53 compute-0 podman[256949]: 2025-11-28 18:35:53.131860657 +0000 UTC m=+0.178436068 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 28 18:35:53 compute-0 nova_compute[189296]: 2025-11-28 18:35:53.224 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:35:54 compute-0 nova_compute[189296]: 2025-11-28 18:35:54.170 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:35:58 compute-0 nova_compute[189296]: 2025-11-28 18:35:58.227 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:35:59 compute-0 nova_compute[189296]: 2025-11-28 18:35:59.177 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 18:35:59 compute-0 podman[203494]: time="2025-11-28T18:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:35:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 28 18:35:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4795 "" "Go-http-client/1.1"
Nov 28 18:36:00 compute-0 podman[256974]: 2025-11-28 18:36:00.015428197 +0000 UTC m=+0.073397229 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:36:01 compute-0 openstack_network_exporter[205632]: ERROR   18:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:36:01 compute-0 openstack_network_exporter[205632]: ERROR   18:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:36:01 compute-0 openstack_network_exporter[205632]: ERROR   18:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:36:01 compute-0 openstack_network_exporter[205632]: ERROR   18:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:36:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:36:01 compute-0 openstack_network_exporter[205632]: ERROR   18:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:36:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:36:03 compute-0 nova_compute[189296]: 2025-11-28 18:36:03.230 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:04 compute-0 nova_compute[189296]: 2025-11-28 18:36:04.180 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.543 189300 DEBUG oslo_concurrency.lockutils [None req-d9e921f2-85a9-43c5-b8a5-7baeca3214e5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Acquiring lock "200bd8bc-d121-4a86-b728-ea98aac95adf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.544 189300 DEBUG oslo_concurrency.lockutils [None req-d9e921f2-85a9-43c5-b8a5-7baeca3214e5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "200bd8bc-d121-4a86-b728-ea98aac95adf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.544 189300 DEBUG oslo_concurrency.lockutils [None req-d9e921f2-85a9-43c5-b8a5-7baeca3214e5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Acquiring lock "200bd8bc-d121-4a86-b728-ea98aac95adf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.545 189300 DEBUG oslo_concurrency.lockutils [None req-d9e921f2-85a9-43c5-b8a5-7baeca3214e5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "200bd8bc-d121-4a86-b728-ea98aac95adf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.546 189300 DEBUG oslo_concurrency.lockutils [None req-d9e921f2-85a9-43c5-b8a5-7baeca3214e5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "200bd8bc-d121-4a86-b728-ea98aac95adf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.549 189300 INFO nova.compute.manager [None req-d9e921f2-85a9-43c5-b8a5-7baeca3214e5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Terminating instance#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.552 189300 DEBUG nova.compute.manager [None req-d9e921f2-85a9-43c5-b8a5-7baeca3214e5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 28 18:36:06 compute-0 kernel: tap49c3cd00-3b (unregistering): left promiscuous mode
Nov 28 18:36:06 compute-0 NetworkManager[56307]: <info>  [1764354966.6158] device (tap49c3cd00-3b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 28 18:36:06 compute-0 ovn_controller[97771]: 2025-11-28T18:36:06Z|00184|binding|INFO|Releasing lport 49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7 from this chassis (sb_readonly=0)
Nov 28 18:36:06 compute-0 ovn_controller[97771]: 2025-11-28T18:36:06Z|00185|binding|INFO|Setting lport 49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7 down in Southbound
Nov 28 18:36:06 compute-0 ovn_controller[97771]: 2025-11-28T18:36:06Z|00186|binding|INFO|Removing iface tap49c3cd00-3b ovn-installed in OVS
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.637 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.640 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:06.656 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:fd:79 10.100.2.67'], port_security=['fa:16:3e:c6:fd:79 10.100.2.67'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.2.67/16', 'neutron:device_id': '200bd8bc-d121-4a86-b728-ea98aac95adf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c71a276f38f4bfebf1d3631d6f82966', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b7e19568-d693-4981-82d8-a6cf61584030', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21fa20d8-e3c8-4e6c-a5e8-bb4e198483f9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:36:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:06.658 106624 INFO neutron.agent.ovn.metadata.agent [-] Port 49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7 in datapath a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5 unbound from our chassis#033[00m
Nov 28 18:36:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:06.661 106624 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.673 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:06 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Nov 28 18:36:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:06.689 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[71718076-0fac-47a8-a832-cebf62a257f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:36:06 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Consumed 7min 13.342s CPU time.
Nov 28 18:36:06 compute-0 systemd-machined[155703]: Machine qemu-16-instance-0000000f terminated.
Nov 28 18:36:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:06.738 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[e32e4a35-dafa-4e9f-a654-255d5549495e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:36:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:06.743 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[21af0554-07ae-4382-8972-0820d7c80205]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:36:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:06.788 238923 DEBUG oslo.privsep.daemon [-] privsep: reply[bdb0ea30-151d-4602-89bd-641dffdc1da0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:36:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:06.813 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[e02a8ba3-ce75-4202-baeb-178f6ba69080]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa60c0580-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d1:11:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527149, 'reachable_time': 38669, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257013, 'error': None, 'target': 'ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:36:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:06.835 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[8b4a19ee-7699-4d03-9a6d-85b295bc84a4]: (4, ({'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapa60c0580-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527163, 'tstamp': 527163}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257021, 'error': None, 'target': 'ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa60c0580-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527167, 'tstamp': 527167}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257021, 'error': None, 'target': 'ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:36:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:06.837 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa60c0580-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.839 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.844 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:06.844 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa60c0580-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:36:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:06.845 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:36:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:06.845 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa60c0580-50, col_values=(('external_ids', {'iface-id': '29b269a8-673c-48a9-bc1f-c180355b2c1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:36:06 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:06.845 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.854 189300 INFO nova.virt.libvirt.driver [-] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Instance destroyed successfully.#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.855 189300 DEBUG nova.objects.instance [None req-d9e921f2-85a9-43c5-b8a5-7baeca3214e5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lazy-loading 'resources' on Instance uuid 200bd8bc-d121-4a86-b728-ea98aac95adf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.869 189300 DEBUG nova.virt.libvirt.vif [None req-d9e921f2-85a9-43c5-b8a5-7baeca3214e5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-28T18:22:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-6320023-asg-icnlxuc5b3sh-yo7geqqfagrq-txt7cjpn6wpw',id=15,image_ref='7d5268e2-45b5-44b2-b3c1-3da9b27b258e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-28T18:22:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='a12ef97f-9351-448f-95c7-ab90e2c7b098'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4c71a276f38f4bfebf1d3631d6f82966',ramdisk_id='',reservation_id='r-88oymigz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7d5268e2-45b5-44b2-b3c1-3da9b27b258e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-320555444',owner_user_name='tempest-PrometheusGabbiTest-320555444-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-28T18:22:06Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='c1f6c07dc6c5400cbf4fa724992b16d3',uuid=200bd8bc-d121-4a86-b728-ea98aac95adf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "address": "fa:16:3e:c6:fd:79", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49c3cd00-3b", "ovs_interfaceid": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.869 189300 DEBUG nova.network.os_vif_util [None req-d9e921f2-85a9-43c5-b8a5-7baeca3214e5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Converting VIF {"id": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "address": "fa:16:3e:c6:fd:79", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49c3cd00-3b", "ovs_interfaceid": "49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.870 189300 DEBUG nova.network.os_vif_util [None req-d9e921f2-85a9-43c5-b8a5-7baeca3214e5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c6:fd:79,bridge_name='br-int',has_traffic_filtering=True,id=49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7,network=Network(a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49c3cd00-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.871 189300 DEBUG os_vif [None req-d9e921f2-85a9-43c5-b8a5-7baeca3214e5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:fd:79,bridge_name='br-int',has_traffic_filtering=True,id=49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7,network=Network(a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49c3cd00-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.872 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.873 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49c3cd00-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.874 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.877 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.880 189300 INFO os_vif [None req-d9e921f2-85a9-43c5-b8a5-7baeca3214e5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:fd:79,bridge_name='br-int',has_traffic_filtering=True,id=49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7,network=Network(a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49c3cd00-3b')#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.880 189300 INFO nova.virt.libvirt.driver [None req-d9e921f2-85a9-43c5-b8a5-7baeca3214e5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Deleting instance files /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf_del#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.881 189300 INFO nova.virt.libvirt.driver [None req-d9e921f2-85a9-43c5-b8a5-7baeca3214e5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Deletion of /var/lib/nova/instances/200bd8bc-d121-4a86-b728-ea98aac95adf_del complete#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.939 189300 INFO nova.compute.manager [None req-d9e921f2-85a9-43c5-b8a5-7baeca3214e5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Took 0.39 seconds to destroy the instance on the hypervisor.#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.940 189300 DEBUG oslo.service.loopingcall [None req-d9e921f2-85a9-43c5-b8a5-7baeca3214e5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.940 189300 DEBUG nova.compute.manager [-] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 28 18:36:06 compute-0 nova_compute[189296]: 2025-11-28 18:36:06.940 189300 DEBUG nova.network.neutron [-] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 28 18:36:07 compute-0 nova_compute[189296]: 2025-11-28 18:36:07.171 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:07 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:07.172 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:8b:d3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '02:a2:f8:d3:3f:9a'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:36:07 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:07.173 106624 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 28 18:36:07 compute-0 nova_compute[189296]: 2025-11-28 18:36:07.487 189300 DEBUG nova.compute.manager [req-1c4709fe-9ff5-476a-a502-e96fc6242a85 req-e9427113-5b4f-4707-80d0-c35e1f10c6eb 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Received event network-vif-unplugged-49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:36:07 compute-0 nova_compute[189296]: 2025-11-28 18:36:07.488 189300 DEBUG oslo_concurrency.lockutils [req-1c4709fe-9ff5-476a-a502-e96fc6242a85 req-e9427113-5b4f-4707-80d0-c35e1f10c6eb 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "200bd8bc-d121-4a86-b728-ea98aac95adf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:36:07 compute-0 nova_compute[189296]: 2025-11-28 18:36:07.488 189300 DEBUG oslo_concurrency.lockutils [req-1c4709fe-9ff5-476a-a502-e96fc6242a85 req-e9427113-5b4f-4707-80d0-c35e1f10c6eb 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "200bd8bc-d121-4a86-b728-ea98aac95adf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:36:07 compute-0 nova_compute[189296]: 2025-11-28 18:36:07.488 189300 DEBUG oslo_concurrency.lockutils [req-1c4709fe-9ff5-476a-a502-e96fc6242a85 req-e9427113-5b4f-4707-80d0-c35e1f10c6eb 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "200bd8bc-d121-4a86-b728-ea98aac95adf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:36:07 compute-0 nova_compute[189296]: 2025-11-28 18:36:07.488 189300 DEBUG nova.compute.manager [req-1c4709fe-9ff5-476a-a502-e96fc6242a85 req-e9427113-5b4f-4707-80d0-c35e1f10c6eb 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] No waiting events found dispatching network-vif-unplugged-49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:36:07 compute-0 nova_compute[189296]: 2025-11-28 18:36:07.489 189300 DEBUG nova.compute.manager [req-1c4709fe-9ff5-476a-a502-e96fc6242a85 req-e9427113-5b4f-4707-80d0-c35e1f10c6eb 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Received event network-vif-unplugged-49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 28 18:36:08 compute-0 nova_compute[189296]: 2025-11-28 18:36:08.206 189300 DEBUG nova.network.neutron [-] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:36:08 compute-0 nova_compute[189296]: 2025-11-28 18:36:08.225 189300 INFO nova.compute.manager [-] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Took 1.28 seconds to deallocate network for instance.#033[00m
Nov 28 18:36:08 compute-0 nova_compute[189296]: 2025-11-28 18:36:08.278 189300 DEBUG oslo_concurrency.lockutils [None req-d9e921f2-85a9-43c5-b8a5-7baeca3214e5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:36:08 compute-0 nova_compute[189296]: 2025-11-28 18:36:08.279 189300 DEBUG oslo_concurrency.lockutils [None req-d9e921f2-85a9-43c5-b8a5-7baeca3214e5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:36:08 compute-0 nova_compute[189296]: 2025-11-28 18:36:08.292 189300 DEBUG nova.compute.manager [req-5ff2ee59-3d57-4f2b-88dc-8284bf7b5391 req-227e28f1-01b7-4118-a7cb-d0d30cbea2df 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Received event network-vif-deleted-49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:36:08 compute-0 nova_compute[189296]: 2025-11-28 18:36:08.381 189300 DEBUG nova.compute.provider_tree [None req-d9e921f2-85a9-43c5-b8a5-7baeca3214e5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:36:08 compute-0 nova_compute[189296]: 2025-11-28 18:36:08.401 189300 DEBUG nova.scheduler.client.report [None req-d9e921f2-85a9-43c5-b8a5-7baeca3214e5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:36:08 compute-0 nova_compute[189296]: 2025-11-28 18:36:08.433 189300 DEBUG oslo_concurrency.lockutils [None req-d9e921f2-85a9-43c5-b8a5-7baeca3214e5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:36:08 compute-0 nova_compute[189296]: 2025-11-28 18:36:08.459 189300 INFO nova.scheduler.client.report [None req-d9e921f2-85a9-43c5-b8a5-7baeca3214e5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Deleted allocations for instance 200bd8bc-d121-4a86-b728-ea98aac95adf#033[00m
Nov 28 18:36:08 compute-0 nova_compute[189296]: 2025-11-28 18:36:08.540 189300 DEBUG oslo_concurrency.lockutils [None req-d9e921f2-85a9-43c5-b8a5-7baeca3214e5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "200bd8bc-d121-4a86-b728-ea98aac95adf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:36:09 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:09.176 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d60b742f-7e94-4137-b50a-cfc8eac54167, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:36:09 compute-0 nova_compute[189296]: 2025-11-28 18:36:09.183 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:09 compute-0 nova_compute[189296]: 2025-11-28 18:36:09.614 189300 DEBUG nova.compute.manager [req-e4b75d83-f1f8-48bd-85ea-4e4e0613c787 req-9f81cc01-8d36-432d-b00e-9ceec3fc3a58 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Received event network-vif-plugged-49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:36:09 compute-0 nova_compute[189296]: 2025-11-28 18:36:09.615 189300 DEBUG oslo_concurrency.lockutils [req-e4b75d83-f1f8-48bd-85ea-4e4e0613c787 req-9f81cc01-8d36-432d-b00e-9ceec3fc3a58 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "200bd8bc-d121-4a86-b728-ea98aac95adf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:36:09 compute-0 nova_compute[189296]: 2025-11-28 18:36:09.615 189300 DEBUG oslo_concurrency.lockutils [req-e4b75d83-f1f8-48bd-85ea-4e4e0613c787 req-9f81cc01-8d36-432d-b00e-9ceec3fc3a58 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "200bd8bc-d121-4a86-b728-ea98aac95adf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:36:09 compute-0 nova_compute[189296]: 2025-11-28 18:36:09.615 189300 DEBUG oslo_concurrency.lockutils [req-e4b75d83-f1f8-48bd-85ea-4e4e0613c787 req-9f81cc01-8d36-432d-b00e-9ceec3fc3a58 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "200bd8bc-d121-4a86-b728-ea98aac95adf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:36:09 compute-0 nova_compute[189296]: 2025-11-28 18:36:09.616 189300 DEBUG nova.compute.manager [req-e4b75d83-f1f8-48bd-85ea-4e4e0613c787 req-9f81cc01-8d36-432d-b00e-9ceec3fc3a58 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] No waiting events found dispatching network-vif-plugged-49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:36:09 compute-0 nova_compute[189296]: 2025-11-28 18:36:09.616 189300 WARNING nova.compute.manager [req-e4b75d83-f1f8-48bd-85ea-4e4e0613c787 req-9f81cc01-8d36-432d-b00e-9ceec3fc3a58 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Received unexpected event network-vif-plugged-49c3cd00-3b7b-4e6b-ab4e-e199f5d0c8c7 for instance with vm_state deleted and task_state None.#033[00m
Nov 28 18:36:11 compute-0 nova_compute[189296]: 2025-11-28 18:36:11.876 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:14 compute-0 nova_compute[189296]: 2025-11-28 18:36:14.195 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:14 compute-0 podman[257029]: 2025-11-28 18:36:14.779100267 +0000 UTC m=+0.088474935 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=edpm, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, container_name=openstack_network_exporter, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc.)
Nov 28 18:36:14 compute-0 podman[257030]: 2025-11-28 18:36:14.783442893 +0000 UTC m=+0.094443101 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=f26160204c78771e78cdd2489258319b)
Nov 28 18:36:14 compute-0 podman[257031]: 2025-11-28 18:36:14.810097612 +0000 UTC m=+0.121010458 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.286 189300 DEBUG oslo_concurrency.lockutils [None req-fc7d6209-7885-4461-8ed2-5b207be870f5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Acquiring lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.287 189300 DEBUG oslo_concurrency.lockutils [None req-fc7d6209-7885-4461-8ed2-5b207be870f5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.288 189300 DEBUG oslo_concurrency.lockutils [None req-fc7d6209-7885-4461-8ed2-5b207be870f5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Acquiring lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.288 189300 DEBUG oslo_concurrency.lockutils [None req-fc7d6209-7885-4461-8ed2-5b207be870f5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.289 189300 DEBUG oslo_concurrency.lockutils [None req-fc7d6209-7885-4461-8ed2-5b207be870f5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.292 189300 INFO nova.compute.manager [None req-fc7d6209-7885-4461-8ed2-5b207be870f5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Terminating instance#033[00m
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.294 189300 DEBUG nova.compute.manager [None req-fc7d6209-7885-4461-8ed2-5b207be870f5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 28 18:36:16 compute-0 kernel: tap0a072d7e-c1 (unregistering): left promiscuous mode
Nov 28 18:36:16 compute-0 NetworkManager[56307]: <info>  [1764354976.3386] device (tap0a072d7e-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 28 18:36:16 compute-0 ovn_controller[97771]: 2025-11-28T18:36:16Z|00187|binding|INFO|Releasing lport 0a072d7e-c128-48b9-9754-327584bc3579 from this chassis (sb_readonly=0)
Nov 28 18:36:16 compute-0 ovn_controller[97771]: 2025-11-28T18:36:16Z|00188|binding|INFO|Setting lport 0a072d7e-c128-48b9-9754-327584bc3579 down in Southbound
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.347 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:16 compute-0 ovn_controller[97771]: 2025-11-28T18:36:16Z|00189|binding|INFO|Removing iface tap0a072d7e-c1 ovn-installed in OVS
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.356 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:16 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:16.363 106624 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:e2:c9 10.100.1.22'], port_security=['fa:16:3e:c4:e2:c9 10.100.1.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.22/16', 'neutron:device_id': 'bf6c3ac0-6e00-4be5-ae3a-454d022268e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c71a276f38f4bfebf1d3631d6f82966', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b7e19568-d693-4981-82d8-a6cf61584030', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21fa20d8-e3c8-4e6c-a5e8-bb4e198483f9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>], logical_port=0a072d7e-c128-48b9-9754-327584bc3579) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb303cb47c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 18:36:16 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:16.364 106624 INFO neutron.agent.ovn.metadata.agent [-] Port 0a072d7e-c128-48b9-9754-327584bc3579 in datapath a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5 unbound from our chassis#033[00m
Nov 28 18:36:16 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:16.366 106624 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 28 18:36:16 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:16.367 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[6e4aaf28-c109-4567-a1fb-9a59672ff07f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:36:16 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:16.367 106624 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5 namespace which is not needed anymore#033[00m
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.384 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:16 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000010.scope: Deactivated successfully.
Nov 28 18:36:16 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000010.scope: Consumed 6min 41.360s CPU time.
Nov 28 18:36:16 compute-0 systemd-machined[155703]: Machine qemu-17-instance-00000010 terminated.
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.526 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:16 compute-0 neutron-haproxy-ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5[251677]: [NOTICE]   (251681) : haproxy version is 2.8.14-c23fe91
Nov 28 18:36:16 compute-0 neutron-haproxy-ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5[251677]: [NOTICE]   (251681) : path to executable is /usr/sbin/haproxy
Nov 28 18:36:16 compute-0 neutron-haproxy-ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5[251677]: [WARNING]  (251681) : Exiting Master process...
Nov 28 18:36:16 compute-0 neutron-haproxy-ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5[251677]: [ALERT]    (251681) : Current worker (251683) exited with code 143 (Terminated)
Nov 28 18:36:16 compute-0 neutron-haproxy-ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5[251677]: [WARNING]  (251681) : All workers exited. Exiting... (0)
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.533 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:16 compute-0 systemd[1]: libpod-e82dde58cd74a6b246d5d80527195a2f0196be3cf7b63d7dfc71db4a45b8e7b1.scope: Deactivated successfully.
Nov 28 18:36:16 compute-0 conmon[251677]: conmon e82dde58cd74a6b246d5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e82dde58cd74a6b246d5d80527195a2f0196be3cf7b63d7dfc71db4a45b8e7b1.scope/container/memory.events
Nov 28 18:36:16 compute-0 podman[257114]: 2025-11-28 18:36:16.541902161 +0000 UTC m=+0.062842561 container died e82dde58cd74a6b246d5d80527195a2f0196be3cf7b63d7dfc71db4a45b8e7b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.584 189300 INFO nova.virt.libvirt.driver [-] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Instance destroyed successfully.#033[00m
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.585 189300 DEBUG nova.objects.instance [None req-fc7d6209-7885-4461-8ed2-5b207be870f5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lazy-loading 'resources' on Instance uuid bf6c3ac0-6e00-4be5-ae3a-454d022268e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.597 189300 DEBUG nova.virt.libvirt.vif [None req-fc7d6209-7885-4461-8ed2-5b207be870f5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-28T18:26:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-6320023-asg-icnlxuc5b3sh-jn4jl2rfhndo-7le3q67p2hx5',id=16,image_ref='7d5268e2-45b5-44b2-b3c1-3da9b27b258e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-28T18:26:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='a12ef97f-9351-448f-95c7-ab90e2c7b098'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4c71a276f38f4bfebf1d3631d6f82966',ramdisk_id='',reservation_id='r-tkz6hxoq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7d5268e2-45b5-44b2-b3c1-3da9b27b258e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-320555444',owner_user_name='tempest-PrometheusGabbiTest-320555444-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-28T18:26:20Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='c1f6c07dc6c5400cbf4fa724992b16d3',uuid=bf6c3ac0-6e00-4be5-ae3a-454d022268e8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0a072d7e-c128-48b9-9754-327584bc3579", "address": "fa:16:3e:c4:e2:c9", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a072d7e-c1", "ovs_interfaceid": "0a072d7e-c128-48b9-9754-327584bc3579", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.598 189300 DEBUG nova.network.os_vif_util [None req-fc7d6209-7885-4461-8ed2-5b207be870f5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Converting VIF {"id": "0a072d7e-c128-48b9-9754-327584bc3579", "address": "fa:16:3e:c4:e2:c9", "network": {"id": "a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c71a276f38f4bfebf1d3631d6f82966", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a072d7e-c1", "ovs_interfaceid": "0a072d7e-c128-48b9-9754-327584bc3579", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.598 189300 DEBUG nova.network.os_vif_util [None req-fc7d6209-7885-4461-8ed2-5b207be870f5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c4:e2:c9,bridge_name='br-int',has_traffic_filtering=True,id=0a072d7e-c128-48b9-9754-327584bc3579,network=Network(a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a072d7e-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.599 189300 DEBUG os_vif [None req-fc7d6209-7885-4461-8ed2-5b207be870f5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:e2:c9,bridge_name='br-int',has_traffic_filtering=True,id=0a072d7e-c128-48b9-9754-327584bc3579,network=Network(a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a072d7e-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.600 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.601 189300 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a072d7e-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.602 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.604 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.606 189300 INFO os_vif [None req-fc7d6209-7885-4461-8ed2-5b207be870f5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:e2:c9,bridge_name='br-int',has_traffic_filtering=True,id=0a072d7e-c128-48b9-9754-327584bc3579,network=Network(a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a072d7e-c1')#033[00m
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.607 189300 INFO nova.virt.libvirt.driver [None req-fc7d6209-7885-4461-8ed2-5b207be870f5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Deleting instance files /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8_del#033[00m
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.608 189300 INFO nova.virt.libvirt.driver [None req-fc7d6209-7885-4461-8ed2-5b207be870f5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Deletion of /var/lib/nova/instances/bf6c3ac0-6e00-4be5-ae3a-454d022268e8_del complete#033[00m
Nov 28 18:36:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e82dde58cd74a6b246d5d80527195a2f0196be3cf7b63d7dfc71db4a45b8e7b1-userdata-shm.mount: Deactivated successfully.
Nov 28 18:36:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-279c700c0206dec5b3a1826f0e1709abd6d463665464c7069a38e39e3b74981d-merged.mount: Deactivated successfully.
Nov 28 18:36:16 compute-0 podman[257114]: 2025-11-28 18:36:16.643534947 +0000 UTC m=+0.164475347 container cleanup e82dde58cd74a6b246d5d80527195a2f0196be3cf7b63d7dfc71db4a45b8e7b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 28 18:36:16 compute-0 systemd[1]: libpod-conmon-e82dde58cd74a6b246d5d80527195a2f0196be3cf7b63d7dfc71db4a45b8e7b1.scope: Deactivated successfully.
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.694 189300 INFO nova.compute.manager [None req-fc7d6209-7885-4461-8ed2-5b207be870f5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Took 0.40 seconds to destroy the instance on the hypervisor.#033[00m
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.695 189300 DEBUG oslo.service.loopingcall [None req-fc7d6209-7885-4461-8ed2-5b207be870f5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.696 189300 DEBUG nova.compute.manager [-] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.696 189300 DEBUG nova.network.neutron [-] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 28 18:36:16 compute-0 podman[257158]: 2025-11-28 18:36:16.748940464 +0000 UTC m=+0.068268954 container remove e82dde58cd74a6b246d5d80527195a2f0196be3cf7b63d7dfc71db4a45b8e7b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 28 18:36:16 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:16.757 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[141c9fa6-61e0-44aa-b32b-a1b00d13ac52]: (4, ('Fri Nov 28 06:36:16 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5 (e82dde58cd74a6b246d5d80527195a2f0196be3cf7b63d7dfc71db4a45b8e7b1)\ne82dde58cd74a6b246d5d80527195a2f0196be3cf7b63d7dfc71db4a45b8e7b1\nFri Nov 28 06:36:16 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5 (e82dde58cd74a6b246d5d80527195a2f0196be3cf7b63d7dfc71db4a45b8e7b1)\ne82dde58cd74a6b246d5d80527195a2f0196be3cf7b63d7dfc71db4a45b8e7b1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:36:16 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:16.759 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[7e322f2d-bb53-4306-8ccc-fd6efa65b71d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:36:16 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:16.761 106624 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa60c0580-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.763 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:16 compute-0 kernel: tapa60c0580-50: left promiscuous mode
Nov 28 18:36:16 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:16.769 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[ef10d574-543b-44e3-846b-d30054ba01a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:36:16 compute-0 nova_compute[189296]: 2025-11-28 18:36:16.778 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:16 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:16.792 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe536c7-77ec-4acb-ab84-5b2578e6b383]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:36:16 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:16.794 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[015ac8aa-791a-44e9-b121-4ad78441fcfd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:36:16 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:16.812 238909 DEBUG oslo.privsep.daemon [-] privsep: reply[fadf1ad8-d606-4dfd-bfb2-0d5b9c3efa34]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527141, 'reachable_time': 25845, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257173, 'error': None, 'target': 'ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:36:16 compute-0 systemd[1]: run-netns-ovnmeta\x2da60c0580\x2d5b99\x2d46d0\x2dab1c\x2d07a8ebf4a3e5.mount: Deactivated successfully.
Nov 28 18:36:16 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:16.815 106734 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a60c0580-5b99-46d0-ab1c-07a8ebf4a3e5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 28 18:36:16 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:16.815 106734 DEBUG oslo.privsep.daemon [-] privsep: reply[87981caa-b4a7-4392-826e-93566597ac67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 18:36:17 compute-0 nova_compute[189296]: 2025-11-28 18:36:17.213 189300 DEBUG nova.compute.manager [req-d013ea46-4266-4108-8aad-3ec9cdef8ead req-39764466-54ff-4b77-a0d6-8a6e50bc5b0b 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Received event network-vif-unplugged-0a072d7e-c128-48b9-9754-327584bc3579 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:36:17 compute-0 nova_compute[189296]: 2025-11-28 18:36:17.213 189300 DEBUG oslo_concurrency.lockutils [req-d013ea46-4266-4108-8aad-3ec9cdef8ead req-39764466-54ff-4b77-a0d6-8a6e50bc5b0b 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:36:17 compute-0 nova_compute[189296]: 2025-11-28 18:36:17.214 189300 DEBUG oslo_concurrency.lockutils [req-d013ea46-4266-4108-8aad-3ec9cdef8ead req-39764466-54ff-4b77-a0d6-8a6e50bc5b0b 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:36:17 compute-0 nova_compute[189296]: 2025-11-28 18:36:17.214 189300 DEBUG oslo_concurrency.lockutils [req-d013ea46-4266-4108-8aad-3ec9cdef8ead req-39764466-54ff-4b77-a0d6-8a6e50bc5b0b 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:36:17 compute-0 nova_compute[189296]: 2025-11-28 18:36:17.215 189300 DEBUG nova.compute.manager [req-d013ea46-4266-4108-8aad-3ec9cdef8ead req-39764466-54ff-4b77-a0d6-8a6e50bc5b0b 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] No waiting events found dispatching network-vif-unplugged-0a072d7e-c128-48b9-9754-327584bc3579 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:36:17 compute-0 nova_compute[189296]: 2025-11-28 18:36:17.216 189300 DEBUG nova.compute.manager [req-d013ea46-4266-4108-8aad-3ec9cdef8ead req-39764466-54ff-4b77-a0d6-8a6e50bc5b0b 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Received event network-vif-unplugged-0a072d7e-c128-48b9-9754-327584bc3579 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 28 18:36:17 compute-0 nova_compute[189296]: 2025-11-28 18:36:17.963 189300 DEBUG nova.network.neutron [-] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 28 18:36:17 compute-0 nova_compute[189296]: 2025-11-28 18:36:17.986 189300 INFO nova.compute.manager [-] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Took 1.29 seconds to deallocate network for instance.#033[00m
Nov 28 18:36:18 compute-0 nova_compute[189296]: 2025-11-28 18:36:18.054 189300 DEBUG oslo_concurrency.lockutils [None req-fc7d6209-7885-4461-8ed2-5b207be870f5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:36:18 compute-0 nova_compute[189296]: 2025-11-28 18:36:18.055 189300 DEBUG oslo_concurrency.lockutils [None req-fc7d6209-7885-4461-8ed2-5b207be870f5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:36:18 compute-0 nova_compute[189296]: 2025-11-28 18:36:18.131 189300 DEBUG nova.compute.provider_tree [None req-fc7d6209-7885-4461-8ed2-5b207be870f5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:36:18 compute-0 nova_compute[189296]: 2025-11-28 18:36:18.148 189300 DEBUG nova.scheduler.client.report [None req-fc7d6209-7885-4461-8ed2-5b207be870f5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:36:18 compute-0 nova_compute[189296]: 2025-11-28 18:36:18.175 189300 DEBUG oslo_concurrency.lockutils [None req-fc7d6209-7885-4461-8ed2-5b207be870f5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:36:18 compute-0 nova_compute[189296]: 2025-11-28 18:36:18.215 189300 INFO nova.scheduler.client.report [None req-fc7d6209-7885-4461-8ed2-5b207be870f5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Deleted allocations for instance bf6c3ac0-6e00-4be5-ae3a-454d022268e8#033[00m
Nov 28 18:36:18 compute-0 nova_compute[189296]: 2025-11-28 18:36:18.290 189300 DEBUG oslo_concurrency.lockutils [None req-fc7d6209-7885-4461-8ed2-5b207be870f5 c1f6c07dc6c5400cbf4fa724992b16d3 4c71a276f38f4bfebf1d3631d6f82966 - - default default] Lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:36:19 compute-0 nova_compute[189296]: 2025-11-28 18:36:19.199 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:19 compute-0 nova_compute[189296]: 2025-11-28 18:36:19.341 189300 DEBUG nova.compute.manager [req-948dbfb2-4026-4458-a33f-0cd0c54be631 req-2ac8d6a7-1ef5-438c-a356-3677c0857fe0 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Received event network-vif-plugged-0a072d7e-c128-48b9-9754-327584bc3579 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:36:19 compute-0 nova_compute[189296]: 2025-11-28 18:36:19.342 189300 DEBUG oslo_concurrency.lockutils [req-948dbfb2-4026-4458-a33f-0cd0c54be631 req-2ac8d6a7-1ef5-438c-a356-3677c0857fe0 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Acquiring lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:36:19 compute-0 nova_compute[189296]: 2025-11-28 18:36:19.343 189300 DEBUG oslo_concurrency.lockutils [req-948dbfb2-4026-4458-a33f-0cd0c54be631 req-2ac8d6a7-1ef5-438c-a356-3677c0857fe0 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:36:19 compute-0 nova_compute[189296]: 2025-11-28 18:36:19.344 189300 DEBUG oslo_concurrency.lockutils [req-948dbfb2-4026-4458-a33f-0cd0c54be631 req-2ac8d6a7-1ef5-438c-a356-3677c0857fe0 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] Lock "bf6c3ac0-6e00-4be5-ae3a-454d022268e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:36:19 compute-0 nova_compute[189296]: 2025-11-28 18:36:19.344 189300 DEBUG nova.compute.manager [req-948dbfb2-4026-4458-a33f-0cd0c54be631 req-2ac8d6a7-1ef5-438c-a356-3677c0857fe0 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] No waiting events found dispatching network-vif-plugged-0a072d7e-c128-48b9-9754-327584bc3579 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 28 18:36:19 compute-0 nova_compute[189296]: 2025-11-28 18:36:19.345 189300 WARNING nova.compute.manager [req-948dbfb2-4026-4458-a33f-0cd0c54be631 req-2ac8d6a7-1ef5-438c-a356-3677c0857fe0 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Received unexpected event network-vif-plugged-0a072d7e-c128-48b9-9754-327584bc3579 for instance with vm_state deleted and task_state None.#033[00m
Nov 28 18:36:19 compute-0 nova_compute[189296]: 2025-11-28 18:36:19.346 189300 DEBUG nova.compute.manager [req-948dbfb2-4026-4458-a33f-0cd0c54be631 req-2ac8d6a7-1ef5-438c-a356-3677c0857fe0 16461f23345b43c38a5da830c546576c 7b180a23d9594557b641c965531d79a1 - - default default] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Received event network-vif-deleted-0a072d7e-c128-48b9-9754-327584bc3579 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 28 18:36:21 compute-0 podman[257175]: 2025-11-28 18:36:21.039244997 +0000 UTC m=+0.081447754 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:36:21 compute-0 podman[257177]: 2025-11-28 18:36:21.06113789 +0000 UTC m=+0.104273729 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., architecture=x86_64, name=ubi9, distribution-scope=public, maintainer=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, vcs-type=git, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 28 18:36:21 compute-0 podman[257176]: 2025-11-28 18:36:21.061997572 +0000 UTC m=+0.112083281 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 28 18:36:21 compute-0 podman[257178]: 2025-11-28 18:36:21.092946855 +0000 UTC m=+0.117155374 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm)
Nov 28 18:36:21 compute-0 nova_compute[189296]: 2025-11-28 18:36:21.607 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:21 compute-0 nova_compute[189296]: 2025-11-28 18:36:21.850 189300 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764354966.8489487, 200bd8bc-d121-4a86-b728-ea98aac95adf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:36:21 compute-0 nova_compute[189296]: 2025-11-28 18:36:21.851 189300 INFO nova.compute.manager [-] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] VM Stopped (Lifecycle Event)#033[00m
Nov 28 18:36:21 compute-0 nova_compute[189296]: 2025-11-28 18:36:21.870 189300 DEBUG nova.compute.manager [None req-c1dc9bda-0a36-4ca8-aa8c-b4d50d9da0ba - - - - - -] [instance: 200bd8bc-d121-4a86-b728-ea98aac95adf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:36:24 compute-0 podman[257253]: 2025-11-28 18:36:24.043472027 +0000 UTC m=+0.099328710 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 28 18:36:24 compute-0 nova_compute[189296]: 2025-11-28 18:36:24.204 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:26 compute-0 nova_compute[189296]: 2025-11-28 18:36:26.612 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:29 compute-0 nova_compute[189296]: 2025-11-28 18:36:29.205 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:29 compute-0 podman[203494]: time="2025-11-28T18:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:36:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 28 18:36:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4326 "" "Go-http-client/1.1"
Nov 28 18:36:31 compute-0 podman[257276]: 2025-11-28 18:36:31.091295412 +0000 UTC m=+0.138491294 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:36:31 compute-0 nova_compute[189296]: 2025-11-28 18:36:31.273 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:31 compute-0 openstack_network_exporter[205632]: ERROR   18:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:36:31 compute-0 openstack_network_exporter[205632]: ERROR   18:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:36:31 compute-0 openstack_network_exporter[205632]: ERROR   18:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:36:31 compute-0 openstack_network_exporter[205632]: ERROR   18:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:36:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:36:31 compute-0 openstack_network_exporter[205632]: ERROR   18:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:36:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:36:31 compute-0 nova_compute[189296]: 2025-11-28 18:36:31.580 189300 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764354976.5782294, bf6c3ac0-6e00-4be5-ae3a-454d022268e8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 28 18:36:31 compute-0 nova_compute[189296]: 2025-11-28 18:36:31.580 189300 INFO nova.compute.manager [-] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] VM Stopped (Lifecycle Event)#033[00m
Nov 28 18:36:31 compute-0 nova_compute[189296]: 2025-11-28 18:36:31.604 189300 DEBUG nova.compute.manager [None req-22a14c52-66cd-4b94-a122-be42fdd0fa3e - - - - - -] [instance: bf6c3ac0-6e00-4be5-ae3a-454d022268e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 28 18:36:31 compute-0 nova_compute[189296]: 2025-11-28 18:36:31.616 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:31 compute-0 nova_compute[189296]: 2025-11-28 18:36:31.619 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:36:31 compute-0 nova_compute[189296]: 2025-11-28 18:36:31.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:36:34 compute-0 nova_compute[189296]: 2025-11-28 18:36:34.209 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:34 compute-0 nova_compute[189296]: 2025-11-28 18:36:34.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:36:35 compute-0 nova_compute[189296]: 2025-11-28 18:36:35.627 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:36:35 compute-0 nova_compute[189296]: 2025-11-28 18:36:35.627 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:36:35 compute-0 nova_compute[189296]: 2025-11-28 18:36:35.628 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 18:36:35 compute-0 nova_compute[189296]: 2025-11-28 18:36:35.644 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 28 18:36:36 compute-0 nova_compute[189296]: 2025-11-28 18:36:36.619 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:36 compute-0 nova_compute[189296]: 2025-11-28 18:36:36.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:36:36 compute-0 nova_compute[189296]: 2025-11-28 18:36:36.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:36:39 compute-0 nova_compute[189296]: 2025-11-28 18:36:39.211 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:41 compute-0 nova_compute[189296]: 2025-11-28 18:36:41.623 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:41 compute-0 nova_compute[189296]: 2025-11-28 18:36:41.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:36:42 compute-0 nova_compute[189296]: 2025-11-28 18:36:42.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:36:42 compute-0 nova_compute[189296]: 2025-11-28 18:36:42.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:36:42 compute-0 nova_compute[189296]: 2025-11-28 18:36:42.657 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:36:42 compute-0 nova_compute[189296]: 2025-11-28 18:36:42.658 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:36:42 compute-0 nova_compute[189296]: 2025-11-28 18:36:42.658 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:36:42 compute-0 nova_compute[189296]: 2025-11-28 18:36:42.659 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:36:42 compute-0 nova_compute[189296]: 2025-11-28 18:36:42.980 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:36:42 compute-0 nova_compute[189296]: 2025-11-28 18:36:42.981 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5366MB free_disk=72.30709075927734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:36:42 compute-0 nova_compute[189296]: 2025-11-28 18:36:42.982 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:36:42 compute-0 nova_compute[189296]: 2025-11-28 18:36:42.982 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:36:43 compute-0 nova_compute[189296]: 2025-11-28 18:36:43.033 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:36:43 compute-0 nova_compute[189296]: 2025-11-28 18:36:43.033 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:36:43 compute-0 nova_compute[189296]: 2025-11-28 18:36:43.054 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing inventories for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 28 18:36:43 compute-0 nova_compute[189296]: 2025-11-28 18:36:43.070 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Updating ProviderTree inventory for provider d10a9930-4504-4222-97f7-6727a5a2d43b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 28 18:36:43 compute-0 nova_compute[189296]: 2025-11-28 18:36:43.070 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Updating inventory in ProviderTree for provider d10a9930-4504-4222-97f7-6727a5a2d43b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 28 18:36:43 compute-0 nova_compute[189296]: 2025-11-28 18:36:43.084 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing aggregate associations for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 28 18:36:43 compute-0 nova_compute[189296]: 2025-11-28 18:36:43.110 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Refreshing trait associations for resource provider d10a9930-4504-4222-97f7-6727a5a2d43b, traits: HW_CPU_X86_ABM,COMPUTE_NODE,HW_CPU_X86_SVM,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_SATA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 28 18:36:43 compute-0 nova_compute[189296]: 2025-11-28 18:36:43.132 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:36:43 compute-0 nova_compute[189296]: 2025-11-28 18:36:43.151 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:36:43 compute-0 nova_compute[189296]: 2025-11-28 18:36:43.175 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:36:43 compute-0 nova_compute[189296]: 2025-11-28 18:36:43.176 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:36:44 compute-0 nova_compute[189296]: 2025-11-28 18:36:44.215 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:45 compute-0 podman[257302]: 2025-11-28 18:36:45.037221085 +0000 UTC m=+0.089534012 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, architecture=x86_64, maintainer=Red Hat, Inc., distribution-scope=public, release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7)
Nov 28 18:36:45 compute-0 podman[257304]: 2025-11-28 18:36:45.056905544 +0000 UTC m=+0.099407542 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:36:45 compute-0 podman[257303]: 2025-11-28 18:36:45.069861749 +0000 UTC m=+0.118531817 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:36:46 compute-0 nova_compute[189296]: 2025-11-28 18:36:46.185 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:36:46 compute-0 nova_compute[189296]: 2025-11-28 18:36:46.628 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:49 compute-0 nova_compute[189296]: 2025-11-28 18:36:49.218 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:49 compute-0 nova_compute[189296]: 2025-11-28 18:36:49.623 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:36:51 compute-0 nova_compute[189296]: 2025-11-28 18:36:51.636 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:51.994 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 28 18:36:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:51.994 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 28 18:36:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:51.995 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:36:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:51.996 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fc143395760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:36:51 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:51.998 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:51.999 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433971a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:51.999 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:51.999 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc147365a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:51.999 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.000 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc146255ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.000 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433972c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.000 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1434082c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.000 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.000 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.000 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.000 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.000 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1444a0380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.001 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397b90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.001 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fc1433970b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.001 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc1433973b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.002 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.002 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fc1433971d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.002 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.003 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fc143397c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.003 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.003 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fc143397620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.003 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.003 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fc143397260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.002 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.003 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.004 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fc143397290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.004 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.003 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.004 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fc143408290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.005 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.005 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fc1433972f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.004 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.005 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.005 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.006 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fc144640f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.006 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.007 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fc1433976b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.007 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.007 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fc143397fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.007 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.007 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fc14457db80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.007 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.007 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fc143397950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.007 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.008 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fc143397380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.006 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc14451f530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.error': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.008 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.008 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.error': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.008 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fc143397bc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.009 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.009 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.error': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.009 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fc1433973e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.010 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.010 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fc143397c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.010 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.010 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.error': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.010 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fc143397ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.011 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.011 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fc1460ad370>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.011 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.error': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.011 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.012 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fc143397d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.012 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc143397f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.error': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.012 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.013 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fc145ac7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fc142f050d0>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets.error': [], 'cpu': [], 'network.incoming.bytes.rate': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.013 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fc143397e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.013 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.013 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fc143397650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.014 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.014 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fc143397e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.014 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.014 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fc143397f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.014 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.014 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fc143397230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fc14451fb00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.014 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.015 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.015 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.015 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.015 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.015 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.015 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.015 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.015 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.016 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.016 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.016 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.016 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.016 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.016 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.016 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.016 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.016 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.017 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.017 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.017 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.017 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.017 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.017 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.017 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.017 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:36:52 compute-0 ceilometer_agent_compute[200020]: 2025-11-28 18:36:52.017 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 28 18:36:52 compute-0 podman[257361]: 2025-11-28 18:36:52.048699443 +0000 UTC m=+0.100842847 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 28 18:36:52 compute-0 podman[257364]: 2025-11-28 18:36:52.062652223 +0000 UTC m=+0.101872863 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm)
Nov 28 18:36:52 compute-0 podman[257363]: 2025-11-28 18:36:52.093963105 +0000 UTC m=+0.128113931 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_id=edpm, container_name=kepler, io.openshift.expose-services=, managed_by=edpm_ansible, release-0.7.12=, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, distribution-scope=public, io.buildah.version=1.29.0)
Nov 28 18:36:52 compute-0 podman[257362]: 2025-11-28 18:36:52.10405522 +0000 UTC m=+0.140145934 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 28 18:36:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:52.656 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:36:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:52.657 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:36:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:36:52.657 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:36:54 compute-0 nova_compute[189296]: 2025-11-28 18:36:54.222 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:55 compute-0 podman[257439]: 2025-11-28 18:36:55.096617417 +0000 UTC m=+0.143923986 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 28 18:36:56 compute-0 nova_compute[189296]: 2025-11-28 18:36:56.642 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:59 compute-0 nova_compute[189296]: 2025-11-28 18:36:59.224 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:36:59 compute-0 podman[203494]: time="2025-11-28T18:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:36:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 28 18:36:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4326 "" "Go-http-client/1.1"
Nov 28 18:37:01 compute-0 openstack_network_exporter[205632]: ERROR   18:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:37:01 compute-0 openstack_network_exporter[205632]: ERROR   18:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:37:01 compute-0 openstack_network_exporter[205632]: ERROR   18:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:37:01 compute-0 openstack_network_exporter[205632]: ERROR   18:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:37:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:37:01 compute-0 openstack_network_exporter[205632]: ERROR   18:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:37:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:37:01 compute-0 nova_compute[189296]: 2025-11-28 18:37:01.646 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:37:02 compute-0 podman[257464]: 2025-11-28 18:37:02.015590792 +0000 UTC m=+0.068565441 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 28 18:37:04 compute-0 nova_compute[189296]: 2025-11-28 18:37:04.228 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:37:06 compute-0 nova_compute[189296]: 2025-11-28 18:37:06.650 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:37:09 compute-0 nova_compute[189296]: 2025-11-28 18:37:09.231 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:37:09 compute-0 ovn_controller[97771]: 2025-11-28T18:37:09Z|00190|memory_trim|INFO|Detected inactivity (last active 30017 ms ago): trimming memory
Nov 28 18:37:11 compute-0 nova_compute[189296]: 2025-11-28 18:37:11.656 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:37:14 compute-0 nova_compute[189296]: 2025-11-28 18:37:14.233 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:37:16 compute-0 podman[257488]: 2025-11-28 18:37:16.088938625 +0000 UTC m=+0.135097572 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 28 18:37:16 compute-0 podman[257489]: 2025-11-28 18:37:16.106888411 +0000 UTC m=+0.142378908 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, 
org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 28 18:37:16 compute-0 podman[257490]: 2025-11-28 18:37:16.134926584 +0000 UTC m=+0.165237546 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd)
Nov 28 18:37:16 compute-0 nova_compute[189296]: 2025-11-28 18:37:16.661 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:37:19 compute-0 nova_compute[189296]: 2025-11-28 18:37:19.236 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:37:21 compute-0 nova_compute[189296]: 2025-11-28 18:37:21.666 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:37:23 compute-0 podman[257549]: 2025-11-28 18:37:23.084708151 +0000 UTC m=+0.133741778 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 18:37:23 compute-0 podman[257552]: 2025-11-28 18:37:23.086938546 +0000 UTC m=+0.114274975 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 28 18:37:23 compute-0 podman[257551]: 2025-11-28 18:37:23.092761577 +0000 UTC m=+0.135696885 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, vcs-type=git, container_name=kepler, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, release=1214.1726694543, vendor=Red Hat, Inc.)
Nov 28 18:37:23 compute-0 podman[257550]: 2025-11-28 18:37:23.111080134 +0000 UTC m=+0.157681752 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 28 18:37:24 compute-0 nova_compute[189296]: 2025-11-28 18:37:24.239 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:37:26 compute-0 podman[257626]: 2025-11-28 18:37:26.143634703 +0000 UTC m=+0.191549767 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 28 18:37:26 compute-0 nova_compute[189296]: 2025-11-28 18:37:26.670 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:37:29 compute-0 nova_compute[189296]: 2025-11-28 18:37:29.243 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:37:29 compute-0 podman[203494]: time="2025-11-28T18:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:37:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 28 18:37:29 compute-0 podman[203494]: @ - - [28/Nov/2025:18:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4332 "" "Go-http-client/1.1"
Nov 28 18:37:31 compute-0 openstack_network_exporter[205632]: ERROR   18:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:37:31 compute-0 openstack_network_exporter[205632]: ERROR   18:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:37:31 compute-0 openstack_network_exporter[205632]: ERROR   18:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:37:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:37:31 compute-0 openstack_network_exporter[205632]: ERROR   18:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:37:31 compute-0 openstack_network_exporter[205632]: ERROR   18:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:37:31 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:37:31 compute-0 nova_compute[189296]: 2025-11-28 18:37:31.674 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:37:32 compute-0 nova_compute[189296]: 2025-11-28 18:37:32.636 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:37:33 compute-0 podman[257654]: 2025-11-28 18:37:33.044698432 +0000 UTC m=+0.104767873 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 28 18:37:33 compute-0 nova_compute[189296]: 2025-11-28 18:37:33.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:37:34 compute-0 nova_compute[189296]: 2025-11-28 18:37:34.245 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:37:36 compute-0 nova_compute[189296]: 2025-11-28 18:37:36.624 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:37:36 compute-0 nova_compute[189296]: 2025-11-28 18:37:36.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:37:36 compute-0 nova_compute[189296]: 2025-11-28 18:37:36.625 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 18:37:36 compute-0 nova_compute[189296]: 2025-11-28 18:37:36.679 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:37:37 compute-0 nova_compute[189296]: 2025-11-28 18:37:37.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:37:37 compute-0 nova_compute[189296]: 2025-11-28 18:37:37.627 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 18:37:37 compute-0 nova_compute[189296]: 2025-11-28 18:37:37.628 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 18:37:37 compute-0 nova_compute[189296]: 2025-11-28 18:37:37.652 189300 DEBUG nova.compute.manager [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 28 18:37:39 compute-0 nova_compute[189296]: 2025-11-28 18:37:39.249 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:37:41 compute-0 nova_compute[189296]: 2025-11-28 18:37:41.683 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:37:43 compute-0 nova_compute[189296]: 2025-11-28 18:37:43.625 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:37:43 compute-0 nova_compute[189296]: 2025-11-28 18:37:43.626 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:37:43 compute-0 nova_compute[189296]: 2025-11-28 18:37:43.661 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:37:43 compute-0 nova_compute[189296]: 2025-11-28 18:37:43.662 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:37:43 compute-0 nova_compute[189296]: 2025-11-28 18:37:43.662 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:37:43 compute-0 nova_compute[189296]: 2025-11-28 18:37:43.663 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 18:37:44 compute-0 nova_compute[189296]: 2025-11-28 18:37:44.118 189300 WARNING nova.virt.libvirt.driver [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 18:37:44 compute-0 nova_compute[189296]: 2025-11-28 18:37:44.119 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5365MB free_disk=72.30709075927734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 18:37:44 compute-0 nova_compute[189296]: 2025-11-28 18:37:44.120 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:37:44 compute-0 nova_compute[189296]: 2025-11-28 18:37:44.120 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:37:44 compute-0 nova_compute[189296]: 2025-11-28 18:37:44.209 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 18:37:44 compute-0 nova_compute[189296]: 2025-11-28 18:37:44.210 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 18:37:44 compute-0 nova_compute[189296]: 2025-11-28 18:37:44.248 189300 DEBUG nova.compute.provider_tree [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed in ProviderTree for provider: d10a9930-4504-4222-97f7-6727a5a2d43b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 18:37:44 compute-0 nova_compute[189296]: 2025-11-28 18:37:44.254 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:37:44 compute-0 nova_compute[189296]: 2025-11-28 18:37:44.265 189300 DEBUG nova.scheduler.client.report [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Inventory has not changed for provider d10a9930-4504-4222-97f7-6727a5a2d43b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 18:37:44 compute-0 nova_compute[189296]: 2025-11-28 18:37:44.267 189300 DEBUG nova.compute.resource_tracker [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 18:37:44 compute-0 nova_compute[189296]: 2025-11-28 18:37:44.267 189300 DEBUG oslo_concurrency.lockutils [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:37:45 compute-0 nova_compute[189296]: 2025-11-28 18:37:45.267 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:37:46 compute-0 nova_compute[189296]: 2025-11-28 18:37:46.629 189300 DEBUG oslo_service.periodic_task [None req-95ed667c-e99a-4299-850d-14c3ed924afc - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 18:37:46 compute-0 nova_compute[189296]: 2025-11-28 18:37:46.689 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:37:47 compute-0 podman[257679]: 2025-11-28 18:37:47.041543074 +0000 UTC m=+0.097788152 container health_status 051a6c35f410beca589982c7acf43e7bbb7bc257397a82c3060ab675029c6f13 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, container_name=openstack_network_exporter, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, build-date=2025-08-20T13:12:41, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.component=ubi9-minimal-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 28 18:37:47 compute-0 podman[257681]: 2025-11-28 18:37:47.045923961 +0000 UTC m=+0.092131295 container health_status bee16e8c64be82db2e67721c4fccd6cc311477d962a68cade377ca65a4d2f1cc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 28 18:37:47 compute-0 podman[257680]: 2025-11-28 18:37:47.049664502 +0000 UTC m=+0.097012134 container health_status 210f7dce98baa3597cfef9b52a16f01815d135d6233060f14c54405d2852c066 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=f26160204c78771e78cdd2489258319b, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 28 18:37:49 compute-0 nova_compute[189296]: 2025-11-28 18:37:49.258 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:37:51 compute-0 nova_compute[189296]: 2025-11-28 18:37:51.695 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:37:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:37:52.657 106624 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 18:37:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:37:52.657 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 18:37:52 compute-0 ovn_metadata_agent[106619]: 2025-11-28 18:37:52.658 106624 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 18:37:54 compute-0 podman[257740]: 2025-11-28 18:37:54.036879132 +0000 UTC m=+0.089812569 container health_status f1f6b4ac151d1472ce44b61b733fb8135823e68ea8c71f96512461c7c4fba4b7 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_id=edpm, container_name=kepler, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, version=9.4, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release-0.7.12=, release=1214.1726694543)
Nov 28 18:37:54 compute-0 podman[257738]: 2025-11-28 18:37:54.044947788 +0000 UTC m=+0.098056790 container health_status 28981f1cb9c8d66f1a79546908ca7eae0fb81bfa0e58718993516fef9c2062cc (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 28 18:37:54 compute-0 podman[257739]: 2025-11-28 18:37:54.062623769 +0000 UTC m=+0.110451652 container health_status b989d9b0f3ca51d70a7f52dd35a32d8957f1c1b1c375478261c4f98af659bf7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 28 18:37:54 compute-0 podman[257741]: 2025-11-28 18:37:54.07174773 +0000 UTC m=+0.119482290 container health_status fe0b82f102f29610e01aebe8ca7828cf1e5b4a8c4925ee653007169c27f665e1 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 28 18:37:54 compute-0 nova_compute[189296]: 2025-11-28 18:37:54.261 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:37:56 compute-0 systemd-logind[790]: New session 30 of user zuul.
Nov 28 18:37:56 compute-0 systemd[1]: Started Session 30 of User zuul.
Nov 28 18:37:56 compute-0 podman[257817]: 2025-11-28 18:37:56.278374284 +0000 UTC m=+0.101454262 container health_status 3e6f0311bb8fdc7a8d74e81fe6eef50297fe2ddade6bff1aef80fcddea1a1fb3 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 18:37:56 compute-0 nova_compute[189296]: 2025-11-28 18:37:56.698 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:37:59 compute-0 nova_compute[189296]: 2025-11-28 18:37:59.265 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:37:59 compute-0 podman[203494]: time="2025-11-28T18:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 18:37:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 28 18:37:59 compute-0 podman[203494]: @ - - [28/Nov/2025:18:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4324 "" "Go-http-client/1.1"
Nov 28 18:38:01 compute-0 openstack_network_exporter[205632]: ERROR   18:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 18:38:01 compute-0 openstack_network_exporter[205632]: ERROR   18:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:38:01 compute-0 openstack_network_exporter[205632]: ERROR   18:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 18:38:01 compute-0 openstack_network_exporter[205632]: ERROR   18:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 18:38:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:38:01 compute-0 openstack_network_exporter[205632]: ERROR   18:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 18:38:01 compute-0 openstack_network_exporter[205632]: 
Nov 28 18:38:01 compute-0 ovs-vsctl[258011]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 28 18:38:01 compute-0 nova_compute[189296]: 2025-11-28 18:38:01.700 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:38:02 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 257867 (sos)
Nov 28 18:38:02 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 28 18:38:02 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 28 18:38:02 compute-0 virtqemud[189019]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 28 18:38:02 compute-0 virtqemud[189019]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 28 18:38:02 compute-0 virtqemud[189019]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 28 18:38:03 compute-0 podman[258294]: 2025-11-28 18:38:03.188959046 +0000 UTC m=+0.087384889 container health_status 27aa3fbe25ffd3dd64ec9dc236daecfb5b4142172c9d9558d30b01af222b8f95 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 28 18:38:04 compute-0 nova_compute[189296]: 2025-11-28 18:38:04.267 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 18:38:06 compute-0 systemd[1]: Starting Hostname Service...
Nov 28 18:38:06 compute-0 systemd[1]: Started Hostname Service.
Nov 28 18:38:06 compute-0 nova_compute[189296]: 2025-11-28 18:38:06.704 189300 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
